MongoDB log flood - Node.js

I have started using MongoDB to store my data, and since starting the service I am getting a flood of log messages. I want to turn this logging off; I don't mind having no logs at all, since this is a development environment, and I need to do something because my log file grows to more than 30 GB every 2 or 3 days.
I've tried setting quiet to true, as shown below, but with no success.
root@master:~# cat /etc/mongod.conf
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  verbosity: 0
  destination: file
  logAppend: true
  ###### HERE ######
  quiet: true
  path: /var/log/mongodb/mongod.log
#  path: /dev/null
  component:
    accessControl:
      verbosity: 1
    command:
      verbosity: 1
Any idea how to get clean logs? A log with nothing in it would be fine.
Thank you!

MongoDB logs have verbosity levels from 0 to 5: 0 is the quietest and 5 is the most verbose. The default level is 0. Wherever you are setting verbosity to 1, set it to 0.
You can check the configured log levels with:
db.getLogComponents()
This returns the current log levels, which you can lower to 0 and see whether the logging changes:
db.setLogLevel(<verbosity>, <component>)
where component can be one of accessControl, command, control, geo, index, network, query, replication, storage, journal, or write.
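To apply the same idea through the configuration file rather than at runtime, here is a minimal sketch of just the systemLog section (not the poster's full file; it assumes mongod is restarted after the change):
systemLog:
  verbosity: 0
  quiet: true
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
  component:
    accessControl:
      verbosity: 0
    command:
      verbosity: 0
The component verbosities are dropped from 1 to 0 to match the advice above; quiet: true then suppresses most of the remaining connection and command noise, which is usually what fills the log in a busy development setup.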

Related

"Function not implemented" error when creating a Kubernetes Postgres cluster

I am using postgres-operator version 1.8.2. I am trying to create a Postgres cluster with a persistent volume on Azure Blob Storage, but creation fails for every cluster. Please find the error below:
2022-07-18 01:11:24,907 - bootstrapping - INFO - Figuring out my environment (Google? AWS? Openstack? Local?)
2022-07-18 01:11:24,924 - bootstrapping - INFO - No meta-data available for this provider
2022-07-18 01:11:24,925 - bootstrapping - INFO - Looks like your running unsupported
2022-07-18 01:11:25,011 - bootstrapping - INFO - Configuring bootstrap
2022-07-18 01:11:25,011 - bootstrapping - INFO - Configuring certificate
2022-07-18 01:11:25,011 - bootstrapping - INFO - Generating ssl self-signed certificate
2022-07-18 01:11:25,420 - bootstrapping - INFO - Configuring pgbouncer
2022-07-18 01:11:25,420 - bootstrapping - INFO - No PGBOUNCER_CONFIGURATION was specified, skipping
2022-07-18 01:11:25,420 - bootstrapping - INFO - Configuring patroni
2022-07-18 01:11:25,428 - bootstrapping - INFO - Writing to file /run/postgres.yml
2022-07-18 01:11:25,428 - bootstrapping - INFO - Configuring pam-oauth2
2022-07-18 01:11:25,429 - bootstrapping - INFO - Writing to file /etc/pam.d/postgresql
2022-07-18 01:11:25,429 - bootstrapping - INFO - Configuring log
2022-07-18 01:11:25,429 - bootstrapping - INFO - Configuring wal-e
2022-07-18 01:11:25,429 - bootstrapping - INFO - Configuring crontab
2022-07-18 01:11:25,429 - bootstrapping - INFO - Skipping creation of renice cron job due to lack of SYS_NICE capability
2022-07-18 01:11:25,430 - bootstrapping - INFO - Configuring pgqd
2022-07-18 01:11:25,430 - bootstrapping - INFO - Configuring standby-cluster
2022-07-18 01:11:26,966 WARNING: Kubernetes RBAC doesn't allow GET access to the 'kubernetes' endpoint in the 'default' namespace. Disabling 'bypass_api_service'.
2022-07-18 01:11:27,018 INFO: No PostgreSQL configuration items changed, nothing to reload.
2022-07-18 01:11:27,111 INFO: Lock owner: None; I am platform-postgres-ha-cluster-0
2022-07-18 01:11:27,227 INFO: trying to bootstrap a new cluster
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /home/postgres/pgdata/pgroot/data ... 2022-07-18T01:11:27.439948215Z ok
creating subdirectories ... 2022-07-18T01:11:30.354678043Z ok
selecting default max_connections ... 2022-07-18T01:11:30.719106299Z 100
selecting default shared_buffers ... 2022-07-18T01:11:30.913778725Z 128MB
selecting default timezone ... 2022-07-18T01:11:30.936790305Z Etc/UTC
selecting dynamic shared memory implementation ... 2022-07-18T01:11:30.937331209Z posix
creating configuration files ... 2022-07-18T01:11:31.421589304Z ok
running bootstrap script ... 2022-07-18T01:11:32.551608060Z LOG: could not link file "pg_xlog/xlogtemp.70" to "pg_xlog/000000010000000000000001": Function not implemented
FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
child process exited with exit code 1
initdb: removing contents of data directory "/home/postgres/pgdata/pgroot/data"
pg_ctl: database system initialization failed
2022-07-18 01:11:32,647 INFO: removing initialize key after failed attempt to bootstrap the cluster
2022-07-18 01:11:32,725 INFO: renaming data directory to /home/postgres/pgdata/pgroot/data_2022-07-18-01-11-32
My postgres-cluster.yaml is:
kind: "postgresql"
apiVersion: "acid.zalan.do/v1"
metadata:
  name: "platform-postgres-ha-cluster"
  namespace: "postgres"
  labels:
    team: platform
spec:
  teamId: "platform"
  postgresql:
    version: "9.6"
  numberOfInstances: 3
  enableMasterLoadBalancer: true
  enableReplicaLoadBalancer: true
  volume:
    storageClass: blob-fuse-retained  # storageClass
    subPath: postgres-1
    size: "10Gi"
  users:
    test: []
  databases:
    test: test
  allowedSourceRanges:
    # IP ranges to access your cluster go here
    # Your odd host IP
    - 127.0.0.1/32
  resources:
    requests:
      cpu: 100m
      memory: 1Gi
    limits:
      cpu: 500m
      memory: 5Gi
I am using a Helm chart to install the operator. My values.yml is:
image:
  registry: registry.opensource.zalan.do
  repository: acid/postgres-operator
  tag: v1.8.2
  pullPolicy: "IfNotPresent"
  # Optionally specify an array of imagePullSecrets.
  # Secrets must be manually created in the namespace.
  # ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  # imagePullSecrets:
  #   - name: myRegistryKeySecretName

podAnnotations: {}
podLabels: {}

configTarget: "OperatorConfigurationCRD"

# JSON logging format
enableJsonLogging: false

# general configuration parameters
configGeneral:
  # the deployment should create/update the CRDs
  enable_crd_registration: true
  # specify categories under which crds should be listed
  crd_categories:
    - "all"
  # update only the statefulsets without immediately doing the rolling update
  enable_lazy_spilo_upgrade: false
  # set the PGVERSION env var instead of providing the version via postgresql.bin_dir in SPILO_CONFIGURATION
  enable_pgversion_env_var: true
  # start any new database pod without limitations on shm memory
  enable_shm_volume: true
  # enables backwards compatible path between Spilo 12 and Spilo 13+ images
  enable_spilo_wal_path_compat: true
  # etcd connection string for Patroni. Empty uses K8s-native DCS.
  etcd_host: ""
  # Select if setup uses endpoints (default), or configmaps to manage leader (DCS=k8s)
  # kubernetes_use_configmaps: false
  # Spilo docker image
  docker_image: registry.opensource.zalan.do/acid/spilo-14:2.1-p6
  # min number of instances in Postgres cluster. -1 = no limit
  min_instances: -1
  # max number of instances in Postgres cluster. -1 = no limit
  max_instances: -1
  # period between consecutive repair requests
  repair_period: 5m
  # period between consecutive sync requests
  resync_period: 30m
  # can prevent certain cases of memory overcommitment
  # set_memory_request_to_limit: false
  # map of sidecar names to docker images
  # sidecar_docker_images:
  #   example: "exampleimage:exampletag"
  # number of routines the operator spawns to process requests concurrently
  workers: 8

# parameters describing Postgres users
configUsers:
  # roles to be granted to database owners
  # additional_owner_roles:
  #   - cron_admin
  # enable password rotation for app users that are not database owners
  enable_password_rotation: false
  # rotation interval for updating credentials in K8s secrets of app users
  password_rotation_interval: 600000
  # retention interval to keep rotation users
  password_rotation_user_retention: 600000
  # postgres username used for replication between instances
  replication_username: standby
  # postgres superuser name to be created by initdb
  super_username: postgres

configMajorVersionUpgrade:
  # "off": no upgrade, "manual": manifest triggers action, "full": minimal version violation triggers too
  major_version_upgrade_mode: "off"
  # upgrades will only be carried out for clusters of listed teams when mode is "off"
  # major_version_upgrade_team_allow_list:
  #   - acid
  # minimal Postgres major version that will not automatically be upgraded
  minimal_major_version: "9.6"
  # target Postgres major version when upgrading clusters automatically
  target_major_version: "14"

configKubernetes:
  # list of additional capabilities for postgres container
  # additional_pod_capabilities:
  #   - "SYS_NICE"
  # default DNS domain of K8s cluster where operator is running
  cluster_domain: cluster.local
  # additional labels assigned to the cluster objects
  cluster_labels:
    application: spilo
  # label assigned to Kubernetes objects created by the operator
  cluster_name_label: postgres-ha
  # additional annotations to add to every database pod
  # custom_pod_annotations:
  #   keya: valuea
  #   keyb: valueb
  # key name for annotation that compares manifest value with current date
  # delete_annotation_date_key: "delete-date"
  # key name for annotation that compares manifest value with cluster name
  # delete_annotation_name_key: "delete-clustername"
  # list of annotations propagated from cluster manifest to statefulset and deployment
  # downscaler_annotations:
  #   - deployment-time
  #   - downscaler/*
  # allow user secrets in other namespaces than the Postgres cluster
  enable_cross_namespace_secret: false
  # enables initContainers to run actions before Spilo is started
  enable_init_containers: true
  # toggles pod anti affinity on the Postgres pods
  enable_pod_antiaffinity: false
  # toggles PDB to set to MinAvailabe 0 or 1
  enable_pod_disruption_budget: true
  # enables sidecar containers to run alongside Spilo in the same pod
  enable_sidecars: true
  # annotations to be ignored when comparing statefulsets, services etc.
  # ignored_annotations:
  #   - k8s.v1.cni.cncf.io/network-status
  # namespaced name of the secret containing infrastructure roles names and passwords
  # infrastructure_roles_secret_name: postgresql-infrastructure-roles
  # list of annotation keys that can be inherited from the cluster manifest
  # inherited_annotations:
  #   - owned-by
  # list of label keys that can be inherited from the cluster manifest
  # inherited_labels:
  #   - application
  #   - environment
  # timeout for successful migration of master pods from unschedulable node
  # master_pod_move_timeout: 20m
  # set of labels that a running and active node should possess to be considered ready
  # node_readiness_label:
  #   status: ready
  # defines how nodeAffinity from manifest should be merged with node_readiness_label
  # node_readiness_label_merge: "OR"
  # namespaced name of the secret containing the OAuth2 token to pass to the teams API
  # oauth_token_secret_name: postgresql-operator
  # defines the template for PDB (Pod Disruption Budget) names
  pdb_name_format: "postgres-{cluster}-pdb"
  # override topology key for pod anti affinity
  pod_antiaffinity_topology_key: "kubernetes.io/hostname"
  # namespaced name of the ConfigMap with environment variables to populate on every pod
  pod_environment_configmap: "postgres-pod-config"
  # name of the Secret (in cluster namespace) with environment variables to populate on every pod
  pod_environment_secret: "postgres-pod-secrets"
  # specify the pod management policy of stateful sets of Postgres clusters
  pod_management_policy: "ordered_ready"
  # label assigned to the Postgres pods (and services/endpoints)
  pod_role_label: spilo-role
  # service account definition as JSON/YAML string to be used by postgres cluster pods
  # pod_service_account_definition: ""
  # role binding definition as JSON/YAML string to be used by pod service account
  # pod_service_account_role_binding_definition: ""
  # Postgres pods are terminated forcefully after this timeout
  pod_terminate_grace_period: 5m
  # template for database user secrets generated by the operator,
  # here username contains the namespace in the format namespace.username
  # if the user is in different namespace than cluster and cross namespace secrets
  # are enabled via `enable_cross_namespace_secret` flag in the configuration.
  secret_name_template: "{username}.{cluster}.credentials.{tprkind}.{tprgroup}"
  # set user and group for the spilo container (required to run Spilo as non-root process)
  # spilo_runasuser: 101
  # spilo_runasgroup: 103
  # group ID with write-access to volumes (required to run Spilo as non-root process)
  # spilo_fsgroup: 103
  # whether the Spilo container should run in privileged mode
  spilo_privileged: false
  # whether the Spilo container should run with additional permissions other than parent.
  # required by cron which needs setuid
  spilo_allow_privilege_escalation: true
  # storage resize strategy, available options are: ebs, pvc, off
  storage_resize_mode: pvc
  # pod toleration assigned to instances of every Postgres cluster
  # toleration:
  #   key: db-only
  #   operator: Exists
  #   effect: NoSchedule
  # operator watches for postgres objects in the given namespace
  watched_namespace: "*"  # listen to all namespaces

# configure resource requests for the Postgres pods
configPostgresPodResources:
  # CPU limits for the postgres containers
  default_cpu_limit: "2"
  # CPU request value for the postgres containers
  default_cpu_request: 1000m
  # memory limits for the postgres containers
  default_memory_limit: 5Gi
  # memory request value for the postgres containers
  default_memory_request: 100Mi
  # hard CPU minimum required to properly run a Postgres cluster
  min_cpu_limit: 250m
  # hard memory minimum required to properly run a Postgres cluster
  min_memory_limit: 250Mi

# timeouts related to some operator actions
configTimeouts:
  # interval between consecutive attempts of operator calling the Patroni API
  patroni_api_check_interval: 20s
  # timeout when waiting for successful response from Patroni API
  patroni_api_check_timeout: 20s
  # timeout when waiting for the Postgres pods to be deleted
  pod_deletion_wait_timeout: 10m
  # timeout when waiting for pod role and cluster labels
  pod_label_wait_timeout: 10m
  # interval between consecutive attempts waiting for postgresql CRD to be created
  ready_wait_interval: 20s
  # timeout for the complete postgres CRD creation
  ready_wait_timeout: 30s
  # interval to wait between consecutive attempts to check for some K8s resources
  resource_check_interval: 10s
  # timeout when waiting for the presence of a certain K8s resource (e.g. Sts, PDB)
  resource_check_timeout: 10m

# configure behavior of load balancers
configLoadBalancer:
  # DNS zone for cluster DNS name when load balancer is configured for cluster
  db_hosted_zone: db.example.com
  # annotations to apply to service when load balancing is enabled
  # custom_service_annotations:
  #   keyx: valuez
  #   keya: valuea
  # toggles service type load balancer pointing to the master pod of the cluster
  enable_master_load_balancer: true
  # toggles service type load balancer pointing to the master pooler pod of the cluster
  enable_master_pooler_load_balancer: false
  # toggles service type load balancer pointing to the replica pod of the cluster
  enable_replica_load_balancer: true
  # toggles service type load balancer pointing to the replica pooler pod of the cluster
  enable_replica_pooler_load_balancer: true
  # define external traffic policy for the load balancer
  external_traffic_policy: "Cluster"
  # defines the DNS name string template for the master load balancer cluster
  master_dns_name_format: "{cluster}.{team}.{hostedzone}"
  # defines the DNS name string template for the replica load balancer cluster
  replica_dns_name_format: "{cluster}-repl.{team}.{hostedzone}"

# options to aid debugging of the operator itself
configDebug:
  # toggles verbose debug logs from the operator
  debug_logging: true
  # toggles operator functionality that require access to the postgres database
  enable_database_access: true

# parameters affecting logging and REST API listener
configLoggingRestApi:
  # REST API listener listens to this port
  api_port: 8080
  # number of entries in the cluster history ring buffer
  cluster_history_entries: 1000
  # number of lines in the ring buffer used to store cluster logs
  ring_log_lines: 100

# configure interaction with non-Kubernetes objects from AWS or GCP
configAwsOrGcp:
  # Additional Secret (aws or gcp credentials) to mount in the pod
  # additional_secret_mount: "some-secret-name"
  # Path to mount the above Secret in the filesystem of the container(s)
  # additional_secret_mount_path: "/some/dir"
  # AWS region used to store ESB volumes
  aws_region: eu-central-1
  # enable automatic migration on AWS from gp2 to gp3 volumes
  enable_ebs_gp3_migration: false
  # defines maximum volume size in GB until which auto migration happens
  # enable_ebs_gp3_migration_max_size: 1000
  # GCP credentials that will be used by the operator / pods
  # gcp_credentials: ""
  # AWS IAM role to supply in the iam.amazonaws.com/role annotation of Postgres pods
  # kube_iam_role: ""
  # S3 bucket to use for shipping postgres daily logs
  # log_s3_bucket: ""
  # S3 bucket to use for shipping WAL segments with WAL-E
  # wal_s3_bucket: ""
  # GCS bucket to use for shipping WAL segments with WAL-E
  # wal_gs_bucket: ""
  # Azure Storage Account to use for shipping WAL segments with WAL-G
  wal_az_storage_account: "azure-account-name"

# configure K8s cron job managed by the operator
configLogicalBackup:
  # image for pods of the logical backup job (example runs pg_dumpall)
  logical_backup_docker_image: "registry.opensource.zalan.do/acid/logical-backup:v1.8.0"
  # path of google cloud service account json file
  # logical_backup_google_application_credentials: ""
  # prefix for the backup job name
  logical_backup_job_prefix: "logical-backup-"
  # storage provider - either "s3" or "gcs"
  logical_backup_provider: "s3"
  # S3 Access Key ID
  logical_backup_s3_access_key_id: ""
  # S3 bucket to store backup results
  logical_backup_s3_bucket: "my-bucket-url"
  # S3 region of bucket
  logical_backup_s3_region: ""
  # S3 endpoint url when not using AWS
  logical_backup_s3_endpoint: ""
  # S3 Secret Access Key
  logical_backup_s3_secret_access_key: ""
  # S3 server side encryption
  logical_backup_s3_sse: "AES256"
  # S3 retention time for stored backups for example "2 week" or "7 days"
  logical_backup_s3_retention_time: ""
  # backup schedule in the cron format
  logical_backup_schedule: "30 00 * * *"

# automate creation of human users with teams API service
configTeamsApi:
  # team_admin_role will have the rights to grant roles coming from PG manifests
  enable_admin_role_for_users: true
  # operator watches for PostgresTeam CRs to assign additional teams and members to clusters
  enable_postgres_team_crd: false
  # toogle to create additional superuser teams from PostgresTeam CRs
  enable_postgres_team_crd_superusers: false
  # toggle to automatically rename roles of former team members and deny LOGIN
  enable_team_member_deprecation: false
  # toggle to grant superuser to team members created from the Teams API
  enable_team_superuser: false
  # toggles usage of the Teams API by the operator
  enable_teams_api: false
  # should contain a URL to use for authentication (username and token)
  # pam_configuration: https://info.example.com/oauth2/tokeninfo?access_token= uid realm=/employees
  # operator will add all team member roles to this group and add a pg_hba line
  pam_role_name: zalandos
  # List of teams which members need the superuser role in each Postgres cluster
  postgres_superuser_teams:
    - postgres_superusers
  # List of roles that cannot be overwritten by an application, team or infrastructure role
  protected_role_names:
    - admin
    - cron_admin
  # Suffix to add if members are removed from TeamsAPI or PostgresTeam CRD
  role_deletion_suffix: "_deleted"
  # role name to grant to team members created from the Teams API
  team_admin_role: admin
  # postgres config parameters to apply to each team member role
  team_api_role_configuration:
    log_statement: all
  # URL of the Teams API service
  # teams_api_url: http://fake-teams-api.default.svc.cluster.local

# configure connection pooler deployment created by the operator
configConnectionPooler:
  # db schema to install lookup function into
  connection_pooler_schema: "pooler"
  # db user for pooler to use
  connection_pooler_user: "pooler"
  # docker image
  connection_pooler_image: "registry.opensource.zalan.do/acid/pgbouncer:master-22"
  # max db connections the pooler should hold
  connection_pooler_max_db_connections: 60
  # default pooling mode
  connection_pooler_mode: "transaction"
  # number of pooler instances
  connection_pooler_number_of_instances: 2
  # default resources
  connection_pooler_default_cpu_request: 500m
  connection_pooler_default_memory_request: 100Mi
  connection_pooler_default_cpu_limit: "1"
  connection_pooler_default_memory_limit: 100Mi

# Zalando's internal CDC stream feature
enableStreams: false

rbac:
  # Specifies whether RBAC resources should be created
  create: true
  # Specifies whether ClusterRoles that are aggregated into the K8s default roles should be created. (https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings)
  createAggregateClusterRoles: false

serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

podServiceAccount:
  # The name of the ServiceAccount to be used by postgres cluster pods
  # If not set a name is generated using the fullname template and "-pod" suffix
  name: "postgres-pod"

# priority class for operator pod
priorityClassName: ""

# priority class for database pods
podPriorityClassName: ""

resources:
  limits:
    cpu: 500m
    memory: 500Mi
  requests:
    cpu: 100m
    memory: 250Mi

securityContext:
  runAsUser: 1000
  runAsNonRoot: true
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: true

# Affinity for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
affinity: {}

# Node labels for pod assignment
# Ref: https://kubernetes.io/docs/user-guide/node-selection/
nodeSelector: {}

# Tolerations for pod assignment
# Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []

controllerID:
  # Specifies whether a controller ID should be defined for the operator
  # Note, all postgres manifest must then contain the following annotation to be found by this operator
  # "acid.zalan.do/controller": <controller-ID-of-the-operator>
  create: false
  # The name of the controller ID to use.
  # If not set and create is true, a name is generated using the fullname template
  name:
The secret and config details are below:
# postgres/psql-wale-creds
apiVersion: v1
kind: Secret
metadata:
  name: postgres-pod-secrets
  namespace: postgres
data:  # all base64 encoded
  AZURE_STORAGE_ACCESS_KEY: 'ACCESS_KEY'
  AZURE_STORAGE_ACCOUNT: 'azure-account'
  CLONE_AZURE_STORAGE_ACCESS_KEY: 'ACCESS_KEY'
  CLONE_AZURE_STORAGE_ACCOUNT: 'azure-account'

# postgres/pod-env-overrides
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-pod-config
  namespace: postgres
data:
  USE_WALG_BACKUP: "true"
  BACKUP_SCHEDULE: "0 3,15 * * *"  # Schedule a base backup at 3:00 and 15:00
  BACKUP_NUM_TO_RETAIN: "180"  # For 2 backups per day, keep 90 days of base backups
  WALG_AZ_PREFIX: "azure://kubernetes-test/$(SCOPE)/$(PGVERSION)"
  # For point in time recovery/restore
  USE_WALG_RESTORE: true
  CLONE_WALG_AZ_PREFIX: azure://kubernetes-test/$(CLONE_SCOPE)/$(PGVERSION)
  CLONE_USE_WALG_BACKUP: true
  CLONE_USE_WALG_RESTORE: true
I am using blob-csi-driver to connect to Azure Blob Storage. The setup works fine with other containers; it errors out only with the Postgres container. Can anyone help me solve this issue?
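Not an answer from the thread, but one way to narrow this down: initdb is failing on a link() call ("could not link file ... Function not implemented"), so it is worth checking whether volumes from this storage class support hard links at all, since blobfuse-backed filesystems commonly do not. A minimal diagnostic sketch; the resource names (linktest, linktest-pvc) are made up, and the storage class and access mode are assumptions based on the cluster manifest above:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: linktest-pvc
  namespace: postgres
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: blob-fuse-retained
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: linktest
  namespace: postgres
spec:
  restartPolicy: Never
  containers:
    - name: linktest
      image: busybox
      # create a file on the mounted volume and try to hard-link it, as initdb does for WAL files
      command: ["sh", "-c", "touch /data/a && ln /data/a /data/b && echo 'hard links supported'"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: linktest-pvc
If the ln call fails with the same "Function not implemented" error, the problem is the storage backend rather than the operator configuration.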

Stop filebeat after ingesting all the logs

I have observed that Filebeat keeps running forever after ingesting all the logs.
Is there any way to make Filebeat stop automatically once all the logs have been ingested?
Is the configuration below correct or not?
filebeat.prospectors:
- shutdown_timeout: 0s
  enabled: true
  paths:
    - D:\new.log
output.logstash:
  hosts: "localhost:5044"
I could not find anything in the Logstash documentation to help me with this.
I would suggest using client_inactivity_timeout => "30" in the beats input section of your logstash.conf file.
Hope this helps.
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html#plugins-inputs-beats-client_inactivity_timeout
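A minimal sketch of what that input section could look like; the port is taken from the question's output.logstash setting, and the rest of the pipeline is assumed to stay as it already is:
input {
  beats {
    port => 5044
    # close beats connections that have been idle for 30 seconds
    client_inactivity_timeout => 30
  }
}
Note that this setting only makes Logstash drop idle connections on its side; it does not make Filebeat itself exit once the files have been fully read.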

How to send node.js logs to Cloudwatch Logs from Elastic Beanstalk Docker application?

Amazon offers these ready-made files for sending Tomcat/Apache/nginx logs to CloudWatch Logs, which work great.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html
However, for my purposes they only send nginx logs, which isn't really sufficient, and unfortunately they also provide zero documentation on the file format. What I'm trying to achieve is to send the Node.js logs from my Docker application to CloudWatch (since autoscaling makes instances come and go).
So I want files like /var/log/eb-docker/containers/eb-current-app/add839a3b599-stdouterr.log to appear in CloudWatch.
What I have tried so far is to adapt the web requests config from the link above:
##############################################################################
## Sends docker logs to CloudWatch Logs
##############################################################################
Mappings:
  CWLogs:
    ApplicationLogGroup:
      LogFile: "/var/log/eb-docker/containers/eb-current-app/*-stdouterr.log"
      TimestampFormat: "%Y-%m-%d %H:%M:%S"

Outputs:
  ApplicationLogGroup:
    Description: "The name of the Cloudwatch Logs Log Group created for this environments web server access logs. You can specify this by setting the value for the environment variable: WebRequestCWLogGroup. Please note: if you update this value, then you will need to go and clear out the old cloudwatch logs group and delete it through Cloudwatch Logs."
    Value: { "Ref" : "AWSEBCloudWatchLogs8832c8d3f1a54c238a40e36f31ef55a0ApplicationLogGroup" }

Resources:
  AWSEBCloudWatchLogs8832c8d3f1a54c238a40e36f31ef55a0ApplicationLogGroup:  ## Must have prefix: AWSEBCloudWatchLogs8832c8d3f1a54c238a40e36f31ef55a0
    Type: "AWS::Logs::LogGroup"
    DependsOn: AWSEBBeanstalkMetadata
    DeletionPolicy: Retain  ## this is required
    Properties:
      LogGroupName:
        "Fn::GetOptionSetting":
          Namespace: "aws:elasticbeanstalk:application:environment"
          OptionName: ApplicationLogGroup
          DefaultValue: {"Fn::Join":["-", [{ "Ref":"AWSEBEnvironmentName" }, "stdouterr"]]}
      RetentionInDays: 14

  ## Register the files/log groups for monitoring
  AWSEBAutoScalingGroup:
    Metadata:
      "AWS::CloudFormation::Init":
        CWLogsAgentConfigSetup:
          files:
            ## any .conf file put into /tmp/cwlogs/conf.d will be added to the cwlogs config (see cwl-agent.config)
            "/tmp/cwlogs/conf.d/stdouterr.conf":
              content : |
                [stdouterr]
                file = `{"Fn::FindInMap":["CWLogs", "ApplicationLogGroup", "LogFile"]}`
                log_group_name = `{ "Ref" : "AWSEBCloudWatchLogs8832c8d3f1a54c238a40e36f31ef55a0ApplicationLogGroup" }`
                log_stream_name = {instance_id}
                datetime_format = `{"Fn::FindInMap":["CWLogs", "ApplicationLogGroup", "TimestampFormat"]}`
              mode  : "000400"
              owner : root
              group : root
Unfortunately this doesn't seem to work. :/
Also, does anyone have any idea whether logs appear at all if, for example, the timestamp format is wrong? This is especially important since exceptions don't have timestamps by default, so the actual errors would just disappear.
My application log lines currently look like this:
2016-07-05 09:11:31 ::1 - GET / 200 (5.107 ms)
You can use this link to set up CloudWatch agents on your Beanstalk instances (if you haven't already): http://serebrov.github.io/html/2015-05-20-cloudwatch-setup.html
Next, try to send the files in /var/lib/docker/containers/*/*.json to collect your Docker logs. That is where the containers' stdout and stderr are written to.
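Not from the original answer, but a sketch of how the question's own .ebextensions file could be pointed at that location: this entry would sit under the same CWLogsAgentConfigSetup files key as the stdouterr.conf entry above. The section name docker-json and the datetime_format are assumptions (Docker's JSON log timestamps are ISO 8601), so adjust them to the actual log contents:
"/tmp/cwlogs/conf.d/docker-json.conf":
  content : |
    [docker-json]
    file = /var/lib/docker/containers/*/*.json
    log_group_name = `{ "Ref" : "AWSEBCloudWatchLogs8832c8d3f1a54c238a40e36f31ef55a0ApplicationLogGroup" }`
    log_stream_name = {instance_id}
    datetime_format = %Y-%m-%dT%H:%M:%S
  mode  : "000400"
  owner : root
  group : root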

Configuring filebeat to get delta data from log file

I am using Filebeat to stream log data to Logstash, but whenever I append new lines to the log file, Filebeat sends the whole file to Logstash again. I want only the newly added (delta) data to be streamed to Logstash when the file grows. Is there some configuration I need to take care of in filebeat.yml? My current Filebeat config looks like this:
filebeat:
  prospectors:
    -
      paths:
        - /Users/yogi/dev-tools/elastic_search/access_log/*.log
      input_type: log
      document_type: app_log
      ignore_older: 10m
      scan_frequency: 10s
output:
  logstash:
    hosts: ["localhost:5000"]
    bulk_max_size: 1024
  console:
    pretty: true
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
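Not part of the original thread, but for context: Filebeat records how far it has read each file in a registry file and normally resumes from that offset, so re-sending the whole file usually means the registry is being lost between runs (or the file is being recreated with a new inode). A sketch of pinning the registry to an explicit path in this 1.x-style config; the path itself is an assumption:
filebeat:
  # keep read offsets in a fixed location so restarts resume instead of re-reading from the start
  registry_file: /Users/yogi/dev-tools/elastic_search/filebeat_registry
  prospectors:
    -
      paths:
        - /Users/yogi/dev-tools/elastic_search/access_log/*.log
      input_type: log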

Hello, I am using an Ubuntu 14.04 system and have installed Logstash 2.2.0 on it. When starting Filebeat for Logstash, I get the following error:

sudo service filebeat start
Loading config file error: YAML config parsing failed on /etc/filebeat/filebeat.yml: yaml: line 14: found character that cannot start any token. Exiting.
I formatted the YAML that you provided in your comment:
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # To fetch all ".log" files from a specific level of subdirectories
      # /var/log/*/*.log can be used.
      # For each file found under this path, a harvester is started.
      # Make sure not file is defined twice as this can lead to unexpected behaviour.
      paths:
        - /var/log/auth.log
        - /var/log/syslog
        #- /var/log/*.log
The corresponding configuration without comments is:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
Try the cleaned-up configuration. I suspect you have a problem with forbidden characters. Please keep in mind that tabs are not allowed in YAML. Do you happen to have a tab or another forbidden character on line 14?
For further information take a look at the Filebeat Configuration Options.
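Not from the original answer, but a quick way to inspect that line for a tab or other hidden character, using standard tools (cat -A prints tabs as ^I and line endings as $):
sed -n '14p' /etc/filebeat/filebeat.yml | cat -A
grep -nP '\t' /etc/filebeat/filebeat.yml
The first command shows exactly what is on line 14; the second lists every line in the file that contains a tab.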
