I have a requirement to monitor a specific file at the /mnt/file-i-need-to-monitor.txt path, where I need to:
Create an alarm if the file no longer exists.
if [ ! -f /mnt/file-i-need-to-monitor.txt ]; then
    # create an aws alarm and notify via email
fi
How can I implement something like this?
I have looked into the AWS logs agent, but it seems to be for pushing custom logs to a log group.
Can someone help me fix this?
Hello Jananath Banuka,
For your case, you can use the AWS CLI to push a custom metric,
and then create an alarm in the console based on this custom metric, triggering when its value indicates the file is missing (e.g. >= 1):
https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/put-metric-data.html
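For example, a minimal sketch of what this could look like (the namespace, metric name, alarm name, and SNS topic ARN below are placeholders of my choosing, and the alarm assumes an SNS topic with an email subscription already exists):

#!/usr/bin/env bash
# check_file.sh - run from cron (e.g. every minute). Pushes 1 when the file is
# missing and 0 when it exists, as a custom CloudWatch metric.
FILE=/mnt/file-i-need-to-monitor.txt

if [ ! -f "$FILE" ]; then
    MISSING=1
else
    MISSING=0
fi

aws cloudwatch put-metric-data \
    --namespace "Custom/FileMonitor" \
    --metric-name "FileMissing" \
    --value "$MISSING"

Then, as a one-time setup, create the alarm on that metric so the SNS topic (and therefore your email) is notified whenever the file goes missing:

aws cloudwatch put-metric-alarm \
    --alarm-name "file-i-need-to-monitor-missing" \
    --namespace "Custom/FileMonitor" \
    --metric-name "FileMissing" \
    --statistic Maximum \
    --period 60 \
    --evaluation-periods 1 \
    --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts-topic

If the cron job itself stops, the metric simply goes silent, so you may also want to add --treat-missing-data breaching to the alarm so missing data points alert as well.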
I'm new to AWS CLI and having trouble getting the tail command to work. I'm using version 2. I try the syntax
aws logs tail group_name /aws/lambda/schedule-jobs --since 1h
and it gives me the error
unknown options: /aws/lambda/schedule-jobs
If I try it without the group_name it tells me the group_name is a required parameter (of course).
If I try it with another log group name it tells me the same error message (with the different log group name).
If I put quotes around the name it gives the same error message.
I know this must be a very simple scenario but I can't find any working examples on Google search or in other SO entries. What am I doing wrong?
Yes, when I call
aws logs describe-log-groups
my log group is there and spelled correctly. I'm in the right account and region.
Yes, I am an admin on the account and have full access to the logs.
Just write aws logs tail /aws/lambda/schedule-jobs --since 1h
The log group name is a positional argument, so drop the literal word group_name.
The aws logs tail command doesn't work on AWS CLI v2.
You can use the following command to access the logs with the v2 API:
aws logs get-log-events --log-group-name <log-group-name> --log-stream-name <log-stream-name> --limit=1000
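For completeness, since get-log-events needs a specific stream name, here is a small sketch (using the log group from the question) that grabs the most recently active stream first; the query expression and variable name are just my choices:

# Find the most recently active stream in the group...
STREAM=$(aws logs describe-log-streams \
    --log-group-name /aws/lambda/schedule-jobs \
    --order-by LastEventTime \
    --descending \
    --max-items 1 \
    --query 'logStreams[0].logStreamName' \
    --output text)

# ...then fetch up to 1000 events from it.
aws logs get-log-events \
    --log-group-name /aws/lambda/schedule-jobs \
    --log-stream-name "$STREAM" \
    --limit 1000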
I have a Node.js app running in Elastic Beanstalk that logs using console.log, console.error, etc., and I have CloudWatch Logs turned on. When I go to Insights and run a query, the output shows up, but it is logged line by line instead of as the entire error.
In the example screenshot, I want the entire output of a single console.log call to go to a single log record, one to one, instead of being split on new lines. Is there a way to do this without removing all line breaks during console.log? Say, a configuration option or something?
The output of the application is sent to standard out (stdout) and standard error (stderr). The AWS Elastic Beanstalk environment leverages Linux rsyslog to capture stdout and stderr to write the information into log files.
This is done through standard rsyslog configuration found here: /etc/rsyslog.d/web.conf
if $programname == 'web' then {
*.=warning;*.=err;*.=crit;*.=alert;*.=emerg /var/log/web.stderr.log
*.=info;*.=notice /var/log/web.stdout.log
}
It is rsyslog that interprets the stack trace from stdout as multiple entries and writes multiple lines to AWS CloudWatch Logs.
I wrote a small article on GitHub that describes the solution for a Java environment, but you can do something similar for Node.js.
I decided to post this question because I have run out of debugging ideas, and any ideas are golden, since I know it can be difficult to help debug a virtual instance through here (debugging code is hard enough, jaja). Anyway, I created a virtual machine in Compute Engine and a log file that I populate, for example, with this code in a Python script; let's call it logging.py:
import logging

logging.basicConfig(filename='app.log', level=logging.INFO,
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
variable = 'example'  # placeholder value, just so the line below runs
logging.info('Some message ' + str(type(variable)))
Every time I run python3 logging.py, app.log is effectively populated. (logging.py and app.log are in the same directory, the /home/username/ folder.)
I want Stackdriver to show this log in the Logging viewer every time it is written, so I installed the Stackdriver agent as follows, on the virtual machine command line:
$ curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
$ sudo bash install-logging-agent.sh
No errors that I can see are reported here, and messages do appear in the Stackdriver viewer.
After this, I created a .conf file at /etc/google-fluentd/config.d/app.conf with these parameters:
<source>
  type tail
  format none
  path /home/username/app.log
  pos_file /var/lib/google-fluentd/pos/app.pos
  read_from_head true
  tag whatever-tag
</source>
After that is created, I run sudo service google-fluentd restart.
Then, when I execute python3 logging.py, no logs are added to the Stackdriver Logging viewer.
So, where might I have gone wrong?
Things I have tried/checked:
- I have more than 13 gigabytes of RAM available.
- If I run logger "some message" on the command line, a log with "some message" is effectively added to the Logging viewer.
- If I run
ps ax | grep fluentd
I obtain:
3033 ? Sl 0:09 /opt/google-fluentd/embedded/bin/ruby /usr/sbin/google-fluentd --log /var/log/google-fluentd/google-fluentd.log --no-supervisor
3309 pts/0 S+ 0:00 grep --color=auto fluentd
- Both my user and the service account I use have the Logging Admin permission in their IAM roles.
- This is the documentation I have based myself on:
https://cloud.google.com/logging/docs/agent/troubleshooting?hl=es-419
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list?hl=es-419
https://cloud.google.com/logging/docs/agent/configuration?hl=es-419
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-engine-6600d81e70e3
https://cloud.google.com/logging/docs/agent/installation
- If I run sudo service google-fluentd status, the agent appears active.
- My instance has access to all the APIs. It is an n1-standard-4 (4 vCPUs, 15 GB of memory) running Ubuntu Linux 18.04.
So, what else can I check to debug this? I'm out of ideas here; hope I'm not being an idiot here :(
Based on my understanding, I think you are looking for the following fluentd resource types:
generic_node
“A generic node identifies a machine or other computational resource for which no more specific resource type is applicable. The label values must uniquely identify the node.”
generic_task
“A generic task identifies an application process for which no more specific resource is applicable, such as a process scheduled by a custom orchestration system. The label values must uniquely identify the task.”
The source of my information can be found here.
This document explains how to send logs from your application in different ways:
Cloud Logging API
Cloud Logging Agent
Generic fluentd
As you mentioned having installed fluentd, let me provide more focused documentation about the Cloud Logging Agent. I also found some Python client library documentation that you may be interested in.
Finally, I found an nginx/apache use-case guide that you may use as a reference.
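If it helps while you work through those docs, here is a rough sanity-check sequence to try after editing the config (a sketch that assumes the default google-fluentd install; the agent log path matches the one visible in your ps output):

# Restart the agent so it picks up app.conf, then look at its own log for
# errors mentioning /home/username/app.log (permissions, parse failures, etc.).
sudo service google-fluentd restart
sudo tail -n 100 /var/log/google-fluentd/google-fluentd.log

# Append a test line to the tailed file, give the agent a few seconds,
# then look in the Logging viewer for entries tagged whatever-tag.
echo "fluentd test $(date)" >> /home/username/app.log
sleep 10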
For some reason, if I change both the path the .conf file points to and the directory where the log is written to /var/logs/ (so the final path is /var/logs/app.log), it does work correctly. Possibly there is a configuration issue that makes the logging agent only capture logs in specific predetermined folders, or a permissions issue that stops it from working when the log is in my user's home directory.
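A quick way to test the permissions part of that hypothesis (a hedged sketch; it assumes the agent runs as a dedicated non-root user, which the first command checks):

# Which user is the agent running as?
ps -o user= -p "$(pgrep -f google-fluentd | head -n 1)"

# Can that user (assumed here to be google-fluentd) actually read the file?
sudo -u google-fluentd cat /home/username/app.log > /dev/null \
    && echo "readable" || echo "not readable"

# If it is not readable, home-directory permissions are a likely culprit; one option:
chmod o+x /home/username
chmod o+r /home/username/app.log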
I found this workaround by chance, though (basically random testing). I did not find anything in the main articles that are supposed to teach me how to configure the logging agent that could point me in the right direction, those articles being:
https://cloud.google.com/logging/docs/agent/troubleshooting?hl=es-419
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list?hl=es-419
https://cloud.google.com/logging/docs/agent/configuration?hl=es-419
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-engine-6600d81e70e3
https://cloud.google.com/logging/docs/agent/installation
If I needed it to work in my home directory, it is not clear just from checking these articles how to do it, which configuration file I would need to change, or where to start, so I recommend that Google improve that aspect of the docs.
The documentation you sent, https://docs.fluentd.org/quickstart, is pretty interesting; maybe I can find the explanation there. Thank you for your help.
I am trying to include in my CI/CD pipeline an update of the script_location, and only this parameter. AWS is asking me to include required parameters such as RoleArn. How can I update only the part of the job configuration I want to change?
This is what I am trying to use:
aws glue update-job --job-name <job_name> --job-update Command="{ScriptLocation=s3://<s3_path_to_script>}"
This is what happens:
An error occurred (InvalidInputException) when calling the UpdateJob operation: Command name should not be null or empty.
If I add the default command name glueetl, this is what happens:
An error occurred (InvalidInputException) when calling the UpdateJob operation: Role should not be null or empty.
An easy way to update a Glue job or a Glue trigger via the CLI is the --cli-input-json option. To get the correct JSON structure, you can run aws glue update-job --generate-cli-skeleton, which returns a complete skeleton into which you can insert your changes.
EX:
{"JobName":"","JobUpdate":{"Description":"","LogUri":"","Role":"","ExecutionProperty":{"MaxConcurrentRuns":0},"Command":{"Name":"","ScriptLocation":"","PythonVersion":""},"DefaultArguments":{"KeyName":""},"NonOverridableArguments":{"KeyName":""},"Connections":{"Connections":[""]},"MaxRetries":0,"AllocatedCapacity":0,"Timeout":0,"MaxCapacity":null,"WorkerType":"G.1X","NumberOfWorkers":0,"SecurityConfiguration":"","NotificationProperty":{"NotifyDelayAfter":0},"GlueVersion":""}}
Here, just fill in the name of the job and change the options you need.
After this you have to transform your JSON into a one-line JSON and pass it to the command wrapped in single quotes:
aws glue update-job --cli-input-json '<one-line-json>'
I hope this helps someone with this problem too.
Ref:
https://docs.aws.amazon.com/cli/latest/reference/glue/update-job.html
https://w3percentagecalculator.com/json-to-one-line-converter/
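If you want to avoid filling in the whole skeleton by hand, another option along the same lines (a sketch, not a tested recipe) is to read the current definition with get-job, patch only ScriptLocation with jq, and feed the result back as the JobUpdate. The job name, script location, and the list of fields stripped in del() are assumptions on my part; UpdateJob rejects some read-only fields returned by GetJob, so that list may need adjusting:

#!/usr/bin/env bash
set -euo pipefail

JOB_NAME="my-glue-job"                              # placeholder job name
NEW_SCRIPT="s3://my-bucket/glue-scripts/etl.py"     # placeholder script location

# Fetch the current definition, change only Command.ScriptLocation, and strip
# fields that UpdateJob does not accept in a JobUpdate structure.
JOB_UPDATE=$(aws glue get-job --job-name "$JOB_NAME" \
    | jq --arg loc "$NEW_SCRIPT" \
         '.Job
          | .Command.ScriptLocation = $loc
          | del(.Name, .CreatedOn, .LastModifiedOn, .MaxCapacity)')

aws glue update-job --job-name "$JOB_NAME" --job-update "$JOB_UPDATE"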
I don't know whether you've solved this problem, but I managed using this command:
aws glue update-job --job-name <gluejobname> --job-update Role=myRoleNameBB,Command="{Name=<someupdatename>,ScriptLocation=<local_filename.py>}"
You don't need the ARN of the role, just the role name. The example above assumes that you have a role named myRoleNameBB and that it has access to AWS Glue.
Note: I used a local file on my laptop. Also, the "Name" part of "Command" is compulsory.
When I run it I get this output:
{
"JobName": "<gluejobname>"
}
Based on what I have found, there is no way to update just part of the job using the update-job API.
I ran into the same issue and provided the role to get past this error. The command worked, but the update-job API actually resets other parameters to their defaults, such as the type of application, job language, class, timeout, max capacity, etc.
So if your pre-existing job is a Spark application in Scala, it will fail, as AWS defaults to Python shell and Python as the job language as part of the update-job API. This API provides no way to set the job language to Scala or to set a main class (required in the case of Scala), although it does provide a way to set the application type to Spark application.
If you do not want to specify the role to the update-job API, one approach is to copy the new script, with the same name, to the same location that your pre-existing ETL job uses, and then trigger your ETL using the start-job-run API as part of the CI process (see the sketch after this answer).
The second approach is to run your ETL directly and force it to use the latest script in the start-job-run call:
aws glue start-job-run --job-name <job-name> --arguments=scriptLocation="<path to your latest script>"
The only caveat with the second approach is that when you look in the console, the ETL job will still reference the old script location. The above command just forces this particular run of the job to use the latest script, which you can confirm by looking at the History tab in the Glue ETL console.
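For the first approach mentioned above, a minimal CI step could look like this (bucket, key, script, and job name are placeholders, and it assumes the job's ScriptLocation already points at that S3 key):

# Overwrite the script at the location the job already points to, then trigger a run.
aws s3 cp ./etl_script.py s3://my-bucket/glue-scripts/etl_script.py
aws glue start-job-run --job-name my-glue-job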
We are logging data to CloudWatch Logs every day. I would like to push this to S3 in batches every hour or day.
Are there any existing Lambda libraries available in Node.js to achieve this?
Logwatch
Configure logwatch in logwatch.conf (where you can configure mailTo and mailFrom), then run logwatch manually:
sudo logwatch --detail high --mailto testmail@mailinator.com --service all --range all
OR
Use Winston with winston-daily-rotate-file, a versatile logging library for Node.js.
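Going back to the CloudWatch-to-S3 part of the original question: CloudWatch Logs also has a built-in export to S3 (the CreateExportTask API). A hedged CLI sketch of a one-hour export is below; the log group, bucket, and prefix are placeholders, and the destination bucket needs a policy allowing CloudWatch Logs to write to it:

# Export the last hour of a log group to S3. Timestamps are in milliseconds.
NOW=$(($(date +%s) * 1000))
HOUR_AGO=$((NOW - 3600000))

aws logs create-export-task \
    --log-group-name my-log-group \
    --from "$HOUR_AGO" \
    --to "$NOW" \
    --destination my-log-export-bucket \
    --destination-prefix exported-logs

Export tasks are asynchronous; you can check their progress with aws logs describe-export-tasks. A Lambda on an hourly schedule could make the equivalent SDK call instead of the CLI.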