How to use "AWS logs tail" - syntax question - aws-cli

I'm new to AWS CLI and having trouble getting the tail command to work. I'm using version 2. I try the syntax
aws logs tail group_name /aws/lambda/schedule-jobs --since 1h
and it gives me the error
unknown options: /aws/lambda/schedule-jobs
If I try it without the group_name it tells me the group_name is a required parameter (of course).
If I try it with another log group name it tells me the same error message (with the different log group name).
If I put quotes around the name it gives the same error message.
I know this must be a very simple scenario but I can't find any working examples on Google search or in other SO entries. What am I doing wrong?
Yes, when I call
aws logs describe-log-groups
my log group is there and spelled correctly. I'm in the right account and region.
Yes, I am an admin on the account and have full access to the logs.

Just write aws logs tail /aws/lambda/schedule-jobs --since 1h. The log group name is a positional argument, so you pass the value directly rather than prefixing it with the literal word group_name.
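A few more working variations of tail in CLI v2, sketched with the same log group from the question:
# stream new events as they arrive, like tail -f
aws logs tail /aws/lambda/schedule-jobs --follow
# events from the last 30 minutes, with shorter per-line output
aws logs tail /aws/lambda/schedule-jobs --since 30m --format short
# only events matching a CloudWatch Logs filter pattern
aws logs tail /aws/lambda/schedule-jobs --since 1h --filter-pattern "ERROR"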

The aws logs tail command isn't available in AWS CLI v1 (it was added in v2). If tail isn't available in your installation, you can fetch the logs with get-log-events instead:
aws logs get-log-events --log-group-name <log-group-name> --log-stream-name <log-stream-name> --limit=1000
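get-log-events needs a concrete stream name. A quick way to look up the most recently active stream first (a sketch using the log group from the question):
# find the most recently active stream in the group
STREAM=$(aws logs describe-log-streams \
  --log-group-name /aws/lambda/schedule-jobs \
  --order-by LastEventTime --descending \
  --query 'logStreams[0].logStreamName' --output text)
# then fetch its events
aws logs get-log-events \
  --log-group-name /aws/lambda/schedule-jobs \
  --log-stream-name "$STREAM" \
  --limit 1000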

Related

EC2 instance running S3 Sync command terminates before data transfer is complete

I have an EC2 instance running Linux. This instance is used to run aws s3 commands.
I want to sync the last 6 months worth of data from source to target S3 buckets. I am using credentials with the necessary permissions to do this.
Initially I just ran the command:
aws s3 sync "s3://source" "s3://target" --query "Contents[?LastModified>='2022-08-11' && LastModified<='2023-01-11']"
However, after maybe 10 mins this command stops running, and only a fraction of the data is synced.
I thought this was because my SSM session was terminating, and with it the command stopped executing.
To combat this, I used the following command to try and ensure that this command would continue to execute even after my SSM terminal session was closed:
nohup aws s3 sync "s3://source" "s3://target" --query "Contents[?LastModified>='2022-08-11' && LastModified<='2023-01-11']" --exclude "*.log" --exclude "*.bak" &
Checking the status of the EC2 instance, the command appears to run for about 20 mins, before clearly stopping for some reason.
The --query parameter controls what information is displayed in the response from an API call.
It does not control which files are copied in an aws s3 sync command. The documentation for aws s3 sync defines the --query parameter as: "A JMESPath query to use in filtering the response data."
Your aws s3 sync command will be synchronizing ALL files unless you use Exclude and Include Filters. These filters operate on the name of the object. It is not possible to limit the sync command by supplying date ranges.
I cannot comment on why the command would stop running before it is complete. I suggest you redirect output to a log file and then review the log file for any clues.
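As a concrete sketch of that suggestion (the log path is just an example, and the --query parameter is dropped since it doesn't filter what gets copied), run the sync under nohup with both stdout and stderr redirected to a file, then review that file when it stops:
# run detached from the SSM session and capture all output for later review
nohup aws s3 sync "s3://source" "s3://target" \
  --exclude "*.log" --exclude "*.bak" \
  > /tmp/s3-sync.log 2>&1 &
# later, inspect progress and any error messages
tail -f /tmp/s3-sync.log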

CloudWatch Alarms based on file existence in Ec2

I have a requirement to monitor a specific file at the /mnt/file-i-need-to-monitor.txt path, where I need to:
Create alarms if the file doesn't exist anymore.
if [ ! -f /mnt/file-i-need-to-monitor.txt ]; then
# create an AWS alarm and notify via email
fi
How can I integrate this methodology?
I have looked into the AWS logs agent, but it seems to be for pushing custom logs to a log group.
Can someone help me fix this?
Hello Jananath Banuka,
For your case, you can use the AWS CLI to push a custom metric, and then create an alarm in the CloudWatch console on that metric, for example one that triggers when its value is 1 or greater.
https://docs.aws.amazon.com/cli/latest/reference/cloudwatch/put-metric-data.html
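A minimal sketch of that approach (the namespace, metric name, alarm name, and SNS topic ARN below are placeholders, not anything prescribed in the thread): a small script, run periodically from cron, pushes 1 when the file is missing and 0 when it is present, and an alarm on that metric sends the notification. The alarm can be created in the console as suggested, or once via the CLI as shown.
#!/bin/bash
# push a 1/0 data point depending on whether the file exists
FILE=/mnt/file-i-need-to-monitor.txt
if [ ! -f "$FILE" ]; then VALUE=1; else VALUE=0; fi
aws cloudwatch put-metric-data \
  --namespace "Custom/FileChecks" \
  --metric-name "FileMissing" \
  --value "$VALUE"

# one-time setup: alarm that fires when the metric reports the file as missing
aws cloudwatch put-metric-alarm \
  --alarm-name "file-i-need-to-monitor-missing" \
  --namespace "Custom/FileChecks" \
  --metric-name "FileMissing" \
  --statistic Maximum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:file-alerts
Subscribing an email address to the SNS topic gives you the email notification, and a cron entry (for example every 5 minutes) keeps the metric current.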

View AWS Glue job logs with CLI

The AWS CLI is very good for managing AWS Glue jobs. But if a job fails, I may not see anything more useful than something like:
"JobRunState": "FAILED",
"ErrorMessage": "User application exited with status 10",
And I have to go through the mountain of CloudWatch logs hoping to find something useful. Would appreciate any ideas on getting all logs through the CLI so I can use things like grep.
Found this question while looking for the answer myself. The following commands get the logs for the most recent job run:
JOB_ID=$(aws glue get-job-runs --job-name $JOB_NAME --query 'JobRuns[0].Id' --output text)
aws logs get-log-events --log-group-name /aws-glue/jobs/output --log-stream-name $JOB_ID
Where $JOB_NAME is the name of your Glue job. You can also use the log group name /aws-glue/jobs/error to see messages written to stderr, though I've found /output more useful.
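To make that output grep-friendly, one option (a sketch reusing $JOB_ID from above and only standard CLI flags) is to print just the message text:
# one message per line, suitable for grep/awk
aws logs get-log-events \
  --log-group-name /aws-glue/jobs/output \
  --log-stream-name "$JOB_ID" \
  --query 'events[*].[message]' --output text | grep -i error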

stackdriver logging agent not showing logs read from a custom log file in stackdriver logging viewer on Google cloud platform

I decided to post this question because I have run out of debugging ideas, and just ideas are golden, since I know it can be difficult to help debug a virtual instance through here (debugging code is hard enough, haha). Anyway, I created a virtual machine in Compute Engine and a log file that I populate, for example, with this command in a Python script, let's call it logging.py:
import logging

# format placeholders need the trailing 's', e.g. %(name)s
logging.basicConfig(filename='app.log', level=logging.INFO,
                    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logging.info('Some message ' + str(type(variable)))  # variable: whatever value you want to log
Every time I run python3 logging.py, app.log is populated as expected (logging.py and app.log are both in the same directory, the /home/username/ folder).
I want Stackdriver to show this log in the logging viewer every time it's written, so I installed the Stackdriver agent as follows on the virtual machine command line:
$ curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
$ sudo bash install-logging-agent.sh
No errors that I can see are reported here; in fact, the messages obtained looked fine.
(Screenshot: messages in the Stackdriver viewer.)
After this, I created a .conf file at /etc/google-fluentd/config.d/app.conf with these parameters:
<source>
type tail
format none
path /home/username/app.log
pos_file /var/lib/google-fluentd/pos/app.pos
read_from_head true
tag whatever-tag
</source>
After that is created, I launch sudo service google-fluentd restart.
After I execute python3 logging.py, no logs are added to Stackdriver's logging viewer.
So, where might I have gone wrong?
Things I have tried/checked:
-Have more than 13 gigabytes of RAM available
-If I run logger "some message" on the command line, I effectively add a log with "some message" to the log viewer
-If I run
ps ax | grep fluentd
I obtain :
3033 ? Sl 0:09 /opt/google-fluentd/embedded/bin/ruby /usr/sbin/google-fluentd --log /var/log/google-fluentd/google-fluentd.log --no-supervisor
3309 pts/0 S+ 0:00 grep --color=auto fluentd
-Both my user and the service account I use have logging admin permissions in IAM roles.
-This is the documentation I have based myself on:
https://cloud.google.com/logging/docs/agent/troubleshooting?hl=es-419
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list?hl=es-419
https://cloud.google.com/logging/docs/agent/configuration?hl=es-419
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-engine-6600d81e70e3
https://cloud.google.com/logging/docs/agent/installation
-If I run sudo service google-fluentd status, the agent appears active.
-My instance has access to all the APIs. It's an n1-standard-4 (4 vCPUs, 15 GB of memory) running Ubuntu Linux 18.04.
So, what else can I check to debug this? I'm out of ideas here; hope I'm not being an idiot :(
Based on my understanding, I think that you are looking for the following fluentd resource types:
generic_node
“A generic node identifies a machine or other computational resource for which no more specific resource type is applicable. The label values must uniquely identify the node.”
generic_task
“A generic task identifies an application process for which no more specific resource is applicable, such as a process scheduled by a custom orchestration system. The label values must uniquely identify the task.”
The source of my information has been found here
This document explains how to send logs from your application in different ways:
Cloud Logging API
Cloud Logging Agent
Generic fluentd
As you mentioned having installed fluentd, let me provide more focused documentation about the Cloud Logging Agent. I also found some Python Client Library documentation that you may be interested in.
Finally, I found a nginx/apache use-case guide that you may use as reference.
For some reason, if I change both the path the .conf file points to and the directory where the log is written to /var/logs/ (so the final path is /var/logs/app.logs), it does work correctly. Possibly there is a configuration issue that causes the logging agent to only capture logs in specific predetermined folders, or a permissions issue that stops it from working when the log is in the user's home directory.
I found this solution by chance, however (basically random testing). I did not find anything in the main articles that are supposed to teach me how to configure the logging agent that could point me in the right direction, those articles being these:
https://cloud.google.com/logging/docs/agent/troubleshooting?hl=es-419
https://cloud.google.com/logging/docs/reference/v2/rest/v2/entries/list?hl=es-419
https://cloud.google.com/logging/docs/agent/configuration?hl=es-419
https://medium.com/google-cloud/how-to-log-your-application-on-google-compute-engine-6600d81e70e3
https://cloud.google.com/logging/docs/agent/installation
If I needed it to work in my home directory, it's not clear just from these articles how to do it, what configuration file I would need to change, or where to start, so I'd recommend that Google improve that aspect of the docs.
The documentation you sent, https://docs.fluentd.org/quickstart, is pretty interesting; maybe I can find the explanation there. Thank you for your help.
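If keeping the log under /home/username is still the goal, one thing worth ruling out (an assumption on my part, not something confirmed above) is that the agent simply cannot read a file inside a home directory with restrictive permissions:
# see which user google-fluentd runs as
ps -ef | grep [g]oogle-fluentd
# try reading the file as that user (google-fluentd here is an assumption; substitute whatever ps shows)
sudo -u google-fluentd cat /home/username/app.log
# if that fails with "Permission denied", loosen the permissions
chmod o+x /home/username
chmod o+r /home/username/app.log
# restart the agent and watch its own log for tail/permission errors
sudo service google-fluentd restart
sudo tail -f /var/log/google-fluentd/google-fluentd.log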

Invalid endpoint on aws s3 ls

I've just installed AWS Command Line Interface on Windows 10 (64-bit). I ran 'aws configure' providing the two keys, a region value of us-east-1, and took the default json format. Then when I run 'aws s3 ls' I get the following error:
Invalid endpoint: https://s3..amazonaws.com
It's either not taking my region, or putting two dots where there should be one in the link. My ~/.aws/config file only has these lines in it:
[default]
region = us-east-1
Any ideas why I get 2 dots in place of my region in the s3 link, causing the invalid endpoint error? Thanks for any assistance.
You could also fix this by setting environment variables in your current session for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION.
(In this case, your error is actually caused by the CLI not finding a value for the region.)
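For example, in a bash-style shell (placeholder values; on Windows Command Prompt the equivalent is set NAME=value, or $env:NAME = "value" in PowerShell):
# current session only
export AWS_ACCESS_KEY_ID=<your access key id>
export AWS_SECRET_ACCESS_KEY=<your secret access key>
export AWS_DEFAULT_REGION=us-east-1
aws s3 ls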
I think that's not because of the region, since you have already set the default to us-east-1 as shown by your ~/.aws/config file, but your output type is not set. Also double-check your access key ID and secret access key.
It should be like:
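Roughly the following layout, with placeholder values standing in for your real keys (a sketch of the expected files, not anyone's actual configuration):
~/.aws/config:
[default]
region = us-east-1
output = json
~/.aws/credentials:
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>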
Also check whether you are able to call other AWS services, for example by trying to create a DynamoDB table with the CLI, and check whether your IAM user has access permissions to S3 or not.
This can happen if you set AWS_DEFAULT_REGION (for instance, to an empty value) in your .bash_profile or .bashrc.
Remove AWS_DEFAULT_REGION from those files.
~/.aws/config should be like this
[default]
output = json
region = eu-west-1
Restart your terminal.
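A quick way to check for a stray value (a sketch using standard shell tools):
# show the current value, and find where it is being set
echo "AWS_DEFAULT_REGION=[$AWS_DEFAULT_REGION]"
grep -n AWS_DEFAULT_REGION ~/.bashrc ~/.bash_profile 2>/dev/null
# clear it for this session while testing
unset AWS_DEFAULT_REGION
aws s3 ls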
You can also just specify the region on the command line itself, like below, if you don't want to rely on the default settings in the ~/.aws/config file...
aws --region us-east-1 s3 cp s3://<bucket> <local destination>
