I am trying to push logs to AWS CloudWatch Logs via the AWS CLI using a bash script:
#!/bin/bash
EVENT_TIME=$(date +%s%3N)
LOG_LEVEL=6
EVENT_SOURCE=myapp
MESSAGE=1
OUTPUT=$(jq -n \
--arg EventTime "$EVENT_TIME" \
--arg LogLevel "$LOG_LEVEL" \
--arg EventSource "$EVENT_SOURCE" \
--arg Message "$MESSAGE" \
'{EventTime:$EventTime,LogLevel:$LogLevel,EventSource:$EventSource,Message:$Message}')
MESSAGE="$OUTPUT"
aws logs put-log-events --log-group-name test --log-stream-name local --log-events timestamp=$(date +%s%3N),message=$MESSAGE
but I am getting error:
Error parsing parameter '--log-events': Expected: '<double quoted>', received: '<none>'
for input:
timestamp=1654692489664,message="{
The command works fine if I replace the JSON message with a simple string. It seems to be a quoting issue, but I'm not sure where the problem is. Any idea?
The message parameter needs to be a string containing the JSON, not the raw JSON created with jq.
Something like this should work:
#!/bin/bash
EVENT_TIME=$(date +%s000)
LOG_LEVEL=6
EVENT_SOURCE=myapp
MESSAGE=1
OUTPUT=$(jq -n \
--arg EventTime "$EVENT_TIME" \
--arg LogLevel "$LOG_LEVEL" \
--arg EventSource "$EVENT_SOURCE" \
--arg Message "$MESSAGE" \
'{EventTime:$EventTime,LogLevel:$LogLevel,EventSource:$EventSource,Message:$Message}')
LOG_MESSAGE=$(echo $OUTPUT | sed 's/"/\\"/g')
aws logs put-log-events --log-group-name test --log-stream-name local --log-events timestamp=$(date +%s000),message=\""$LOG_MESSAGE"\"
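Alternatively, you can sidestep the shorthand-syntax escaping entirely by passing --log-events as JSON, since the AWS CLI accepts list parameters in JSON form. A minimal sketch (same placeholder group/stream names; OUTPUT is the jq result from above):
EVENTS=$(jq -n -c --arg ts "$(date +%s000)" --arg msg "$OUTPUT" \
  '[{timestamp: ($ts|tonumber), message: $msg}]')
aws logs put-log-events --log-group-name test --log-stream-name local --log-events "$EVENTS"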
Also, if you plan to use put-log-events like this, you will need to provide the --sequence-token for consecutive puts. See here: https://docs.aws.amazon.com/cli/latest/reference/logs/put-log-events.html
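If your CLI version still requires it, a hedged sketch of fetching the current token (newer CloudWatch Logs versions ignore sequence tokens; group/stream names as above):
SEQ=$(aws logs describe-log-streams --log-group-name test \
  --query "logStreams[?logStreamName=='local'].uploadSequenceToken" --output text)
aws logs put-log-events --log-group-name test --log-stream-name local \
  --log-events "$EVENTS" --sequence-token "$SEQ"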
It might be best to set up the CloudWatch agent to publish the logs.
Related
Normally, to test my AWS Lambda functions, I'd do this:
IMG_FILE='/some/image/loc//img.jpg' &&\
jo img=%"$IMG_FILE" | curl -X POST -H 'Content-Type: application/json' -d @- "$LAMBDA_HOST" >> output.bin
I'd like to replicate the same with the aws lambda invoke command, like so:
IMG_FILE='/some/image/loc//img.jpg' &&\
jo img=%"$IMG_FILE" | aws lambda invoke --payload XXXXXX --function-name funky_func output.bin
How do I pass the jo output to the --payload param? :(
Oh, looking at the documentation, it appears that --payload can take a file:// parameter. So you can do:
IMG_FILE='/some/image/loc/img.jpg' &&
jo img=%"$IMG_FILE" > img.json &&
aws lambda invoke --payload file://img.json \
--function-name funky_func output.bin
...but it's possible you'll hit some limitation on command line length depending on the size of your image.
It's possible -- though I'm unable to test this at the moment -- that providing a file argument of file:///dev/stdin would let you supply the JSON on stdin. Maybe worth a try.
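An untested sketch of that idea (same placeholder path and function name; on AWS CLI v2 you may also need --cli-binary-format raw-in-base64-out for a raw JSON payload):
IMG_FILE='/some/image/loc/img.jpg' &&
jo img=%"$IMG_FILE" | aws lambda invoke --payload file:///dev/stdin \
  --function-name funky_func output.bin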
Issue
I am trying to use the GitLab YAML lint API on an enterprise instance of GitLab. However, I am getting an empty response (not just an empty JSON object; absolutely zero output).
Steps to duplicate
I am using a stripped-down version of the sample YAML file shown on the GitLab CI/CD tutorial page. The file is shown here:
build-job:
  stage: build
  script:
    - echo "Hello, $GITLAB_USER_LOGIN!"

deploy-prod:
  stage: deploy
  script:
    - echo "This job deploys something from the $CI_COMMIT_BRANCH branch."
I am using the one-line curl command shown on the CI Linting API page.
If I use the command as given (replacing only the filename):
$ jq --null-input --arg yaml "$(<.gitlab-ci.yml)" '.content=$yaml' \
| curl "https://gitlab.mycompany.com/api/v4/ci/lint?include_merged_yaml=true" \
--header 'Content-Type: application/json' \
--data @-
I get the output {"message":"401 Unauthorized"}, which is to be expected as the API call requires an API key. I generate an API key in my profile and try again:
$ export TOKEN='xxxxxxxxxx'
$ jq --null-input --arg yaml "$(<.gitlab-ci.yml)" '.content=$yaml' \
| curl "https://gitlab.mycompany.com/api/v4/ci/lint?include_merged_yaml=true" \
--header "Content-Type: application/json PRIVATE-TOKEN=${TOKEN}" \
--data @-
When I run this, the output shows nothing. This is confirmed by a pipe to wc -c which outputs 0.
The expected output is:
{
"status": "valid",
"errors": [],
"warnings": []
}
Questions:
Why does using my valid API key (a newly generated one) result in no response at all?
How can I fix this, and receive the expected output shown above?
Make sure your token has the api scope, as illustrated here.
Without that scope, you would get a 401 Unauthorized, which would not be parsed by jq at all.
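Also note that your second attempt folds the token into the Content-Type header value; the API expects PRIVATE-TOKEN as a separate header. A corrected sketch of your command:
jq --null-input --arg yaml "$(<.gitlab-ci.yml)" '.content=$yaml' \
  | curl "https://gitlab.mycompany.com/api/v4/ci/lint?include_merged_yaml=true" \
    --header "Content-Type: application/json" \
    --header "PRIVATE-TOKEN: ${TOKEN}" \
    --data @-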
Hi, I tried to check the connection to Event Hubs using kafkacat from one of my VMs in Azure.
I passed the following parameters (with my hub name and keys filled in):
kafkacat \
-b <your-hub-name>.servicebus.windows.net:9092 \
-X security.protocol=sasl_ssl \
-X sasl.mechanism=PLAIN \
-X sasl.username='$ConnectionString' \
-X sasl.password='Endpoint=sb://<your-hub-name>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<primary-key>' \
-L
but I keep getting:
% ERROR: Failed to acquire metadata: Local: Broker transport failure
What could be going wrong here? Do I have to create a topic and SAS authentication and use its keys?
The port for Event Hubs with the Kafka protocol is 9093, not 9092.
https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-kafka-connect-tutorial
bootstrap.servers={YOUR.EVENTHUBS.FQDN}:9093 # e.g. namespace.servicebus.windows.net:9093
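So your command should work with only the port changed (hub name and key remain placeholders):
kafkacat \
  -b <your-hub-name>.servicebus.windows.net:9093 \
  -X security.protocol=sasl_ssl \
  -X sasl.mechanism=PLAIN \
  -X sasl.username='$ConnectionString' \
  -X sasl.password='Endpoint=sb://<your-hub-name>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<primary-key>' \
  -L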
I'm trying to send a simple message with mosquitto_pub to Azure IoT Hub but have run into problems with authorization. I'm using the following script:
mosquitto_pub \
-h xxxdev.azure-devices.net \
-u "xxxdev.azure-devices.net/xxxdev/?api-version=2018-06-30" \
-P "SharedAccessSignature sr=xxx.azure-
devices.net%2Fdevices%2Fxxxdev&sig=YYYYY&se=1570866689&skn=ZZZZZZZ" \
-t "devices/xxxdev/messages/events/" \
--cafile ca.pem \
-p 8883 \
-i xxxdev \
-V mqttv311 \
-d \
-m 'message'
and after running this script I get the following messages:
Client xxxdev sending CONNECT
Client xxxdev received CONNACK (5)
Connection error: Connection Refused: not authorised.
Client xxxdev sending DISCONNECT
My questions are: What exactly do those messages mean? Is it because some parameter, like the password (given with the -P param), is wrong?
I've generated the SAS token with this bash script: https://learn.microsoft.com/en-us/rest/api/eventhub/generate-sas-token
Assuming that this bash script generates the password properly, what else could be the problem here? How can I fix it?
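CONNACK return code 5 means the broker refused the credentials themselves: a wrong username format, an expired se value, or a token signed for the wrong resource URI can all produce it. One way to rule out the hand-rolled token, assuming you have the Azure CLI with the azure-iot extension installed (hub and device names are the placeholders from your script):
# Prints a device-scoped SAS token you can pass directly as -P
az iot hub generate-sas-token --hub-name xxxdev --device-id xxxdev --duration 3600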
I am almost there; the "$i" is where I am having trouble. I have tried ${i}, "$i", and $i. I am sure someone with more experience can help me here; I have been working on this for one full day and it is driving me nuts.
session_name="Some-sesh_name"
profile_name="ephemeral-${account_id}-${profile_path}-`date +%Y%m%d%H%M%S`"
roles=( "arn:aws:iam::11111111111111:role/role_name" "arn:aws:iam::222222222222:role/role_name" )
sts=( $(
aws sts assume-role \
--role-arn "$i" \
--role-session-name "$session_name" \
--query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
--output text
) )
for i in "${roles[#]}";
do $sts ; done
aws configure set aws_access_key_id ${sts[0]} --profile ${profile_name}
aws configure set aws_secret_access_key ${sts[1]} --profile ${profile_name}
aws configure set aws_session_token ${sts[2]} --profile ${profile_name}
That $i is expanded at the moment you define the sts array. After that, it doesn't exist.
To make that aws command reusable, use a function:
roles=(
"arn:aws:iam::11111111111111:role/role_name"
"arn:aws:iam::222222222222:role/role_name"
)
sts() {
aws sts assume-role \
--role-arn "$1" \
--role-session-name "$session_name" \
--query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
--output text
}
for role in "${roles[#]}"; do
sts "$role"
done
Note the use of $1 in the function to retrieve the first argument. The global variable $session_name is still visible inside the function.
I don't understand what you intend with the sts array: in the for loop you call it as a command, but the configure commands take elements of the array, after all the roles have been assumed? Do you want to use the returned data instead?
Do you want:
for role in "${roles[#]}"; do
data=( $(sts "$role") )
aws configure set aws_access_key_id "${data[0]}" --profile "$profile_name"
aws configure set aws_secret_access_key "${data[1]}" --profile "$profile_name"
aws configure set aws_session_token "${data[2]}" --profile "$profile_name"
done
?
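If you would rather not rely on unquoted word splitting to populate the array, a read-based variant of the same loop (assuming the query output remains three whitespace-separated fields):
for role in "${roles[@]}"; do
  # Split the three fields of the assume-role output into named variables
  read -r key secret token < <(sts "$role")
  aws configure set aws_access_key_id "$key" --profile "$profile_name"
  aws configure set aws_secret_access_key "$secret" --profile "$profile_name"
  aws configure set aws_session_token "$token" --profile "$profile_name"
done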