Currently all appender.json_console logs are written to stdout, so they end up in the Docker container's JSON log, which Logstash then picks up.
Is it possible to use the log4j2.properties configuration to write the json_console log to a file, like the other Logstash logs? I'm trying to achieve something like the below:
appender.json_console.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
Current configuration:
appender.console.type = Console
appender.console.name = plain_console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.json_console.type = Console
appender.json_console.name = json_console
appender.json_console.layout.type = JSONLayout
appender.json_console.layout.compact = true
appender.json_console.layout.eventEol = true
appender.rolling.type = RollingFile
appender.rolling.name = plain_rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}.log
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %-.10000m%n
appender.json_rolling.type = RollingFile
appender.json_rolling.name = json_rolling
appender.json_rolling.fileName = ${sys:ls.logs}/logstash-${sys:ls.log.format}.log
appender.json_rolling.filePattern = ${sys:ls.logs}/logstash-${sys:ls.log.format}-%d{yyyy-MM-dd}.log
appender.json_rolling.policies.type = Policies
appender.json_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.json_rolling.policies.time.interval = 1
appender.json_rolling.policies.time.modulate = true
appender.json_rolling.layout.type = JSONLayout
appender.json_rolling.layout.compact = true
appender.json_rolling.layout.eventEol = true
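As an aside, with compact = true and eventEol = true the JSONLayout emits exactly one JSON object per line, which is what makes the resulting file easy to consume line by line. A minimal sketch in Python (the sample line is hand-written; the exact field set depends on the JSONLayout configuration):

```python
import json

# A hand-written sample line in the shape Log4j2's JSONLayout produces
# with compact = true and eventEol = true: one JSON object per line.
line = ('{"timeMillis":1580000000000,"level":"INFO",'
        '"loggerName":"logstash.agent","message":"Pipeline started"}')

event = json.loads(line)
print(event["level"], event["message"])
```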
In case someone is trying to achieve the same thing: the change below fixed my issue.
I had to comment out the stdout output block in logstash.conf:
# stdout
# {
# codec => rubydebug
# }
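For anyone still wanting the log4j-side approach: the configuration above already defines a json_rolling file appender with the desired fileName. Which appenders actually receive events is decided by the rootLogger appender refs (not shown above), which in the stock Logstash log4j2.properties are resolved from the ls.log.format system property, driven in turn by log.format in logstash.yml. So something like the following should route events to the JSON rolling file (a sketch; verify the rootLogger lines in your own log4j2.properties look like this):

```
# logstash.yml
log.format: json

# log4j2.properties (stock Logstash rootLogger refs)
rootLogger.appenderRef.console.ref = ${sys:ls.log.format}_console
rootLogger.appenderRef.rolling.ref = ${sys:ls.log.format}_rolling
```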
I have an AD server running on Windows Server 2019. I just set up a Linux box and configured Samba, but for some reason getent group "domain admins" doesn't show anything. getent passwd Administrator does work, and wbinfo -u and wbinfo --domain-groups work fine as well.
SMB.CONF
[global]
server role = MEMBER SERVER
security = ADS
realm = TESTLAB.COM
workgroup = TEST
dedicated keytab file = /etc/krb5.keytab
kerberos method = secrets and keytab
server string = Samba 4 %h
log file = /var/log/samba/%m.log
log level = 5
idmap config * : backend = tdb
idmap config * : range = 10000-20000
idmap config TESTLAB : backend = rid
idmap config TESTLAB : range = 30000-40000
idmap config TESTLAB : backend = ad
password server = adsrv1.testlab.com
encrypt passwords = yes
winbind refresh tickets = yes
winbind offline logon = yes
winbind enum users = yes
winbind enum groups = yes
winbind nested groups = yes
winbind expand groups = 4
winbind use default domain = yes
#winbind normalize name = yes
os level = 20
domain master = no
local master = yes
preferred master = no
map to guest = bad user
host msdfs = no
netbios name = smbsrv
client min protocol = SMB2
client max protocol = SMB3
client ldap sasl wrapping = plain
hosts allow = 10.0.0.0/16
unix extensions = no
reset on zero vc = yes
veto files = /.bash_logout/.bash_profile/.bash_history/.bashrc/
hide unreadable = yes
acl group control = yes
acl map full control = true
ea support = yes
vfs objects = acl_xattr
store dos attributes = yes
#dos flemode = yes
dos filetimes = yes
enable privileges = yes
restrict anonymous = 2
strict allocate = yes
guest ok = no
load printers = no
printing = bsd
printcap name = /dev/null
disable spoolss = yes
username map = /etc/samba/user.map
template shell = /bin/bash
template homedir = /home/TESTLAB/%U
[Data]
comment = "User Data"
path = /mnt/data
create mask = 0770
browseable = yes
writable = yes
valid users = #"Domain Admins" #"Domain Users"
write list = #"Domain Admins" #"Domain Users"
NSSWITCH.CONF
passwd: compat files winbind sss
group: compat files windind sss
shadow: compat files sss
gshadow: files
hosts: files dns mdns4_minimal [NOTFOUND=return] mdns4
networks: files
protocols: db files
services: db files sss
ethers: db files
rpc: db files
netgroup: nis sss
sudoers: files sss
KRB5.conf
[libdefaults]
default_realm = testlab.com
dns_lookup_realm = false
dns_lookup_kdc = true
# The following krb5.conf variables are only for MIT Kerberos.
kdc_timesync = 1
ccache_type = 4
forwardable = true
proxiable = true
# The following libdefaults parameters are only for Heimdal Kerberos.
fcc-mit-ticketflags = true
[realms]
testlab.com = {
kdc = adsrv1.testlab.com
admin_server = adsrv1.testlab.com
}
ATHENA.MIT.EDU = {
kdc = kerberos.mit.edu
kdc = kerberos-1.mit.edu
kdc = kerberos-2.mit.edu:88
admin_server = kerberos.mit.edu
default_domain = mit.edu
}
ZONE.MIT.EDU = {
kdc = casio.mit.edu
kdc = seiko.mit.edu
admin_server = casio.mit.edu
}
CSAIL.MIT.EDU = {
admin_server = kerberos.csail.mit.edu
default_domain = csail.mit.edu
}
IHTFP.ORG = {
kdc = kerberos.ihtfp.org
admin_server = kerberos.ihtfp.org
}
1TS.ORG = {
kdc = kerberos.1ts.org
admin_server = kerberos.1ts.org
}
ANDREW.CMU.EDU = {
admin_server = kerberos.andrew.cmu.edu
default_domain = andrew.cmu.edu
}
CS.CMU.EDU = {
kdc = kerberos-1.srv.cs.cmu.edu
kdc = kerberos-2.srv.cs.cmu.edu
kdc = kerberos-3.srv.cs.cmu.edu
admin_server = kerberos.cs.cmu.edu
}
DEMENTIA.ORG = {
kdc = kerberos.dementix.org
kdc = kerberos2.dementix.org
admin_server = kerberos.dementix.org
}
stanford.edu = {
kdc = krb5auth1.stanford.edu
kdc = krb5auth2.stanford.edu
kdc = krb5auth3.stanford.edu
master_kdc = krb5auth1.stanford.edu
admin_server = krb5-admin.stanford.edu
default_domain = stanford.edu
}
UTORONTO.CA = {
kdc = kerberos1.utoronto.ca
kdc = kerberos2.utoronto.ca
kdc = kerberos3.utoronto.ca
admin_server = kerberos1.utoronto.ca
default_domain = utoronto.ca
}
[domain_realm]
.mit.edu = ATHENA.MIT.EDU
mit.edu = ATHENA.MIT.EDU
.media.mit.edu = MEDIA-LAB.MIT.EDU
media.mit.edu = MEDIA-LAB.MIT.EDU
.csail.mit.edu = CSAIL.MIT.EDU
csail.mit.edu = CSAIL.MIT.EDU
.whoi.edu = ATHENA.MIT.EDU
whoi.edu = ATHENA.MIT.EDU
.stanford.edu = stanford.edu
.slac.stanford.edu = SLAC.STANFORD.EDU
.toronto.edu = UTORONTO.CA
.utoronto.ca = UTORONTO.CA
.testlab.com = TESTLAB.COM
SSSD.CONF
[sssd]
config_file_version = 2
services = nss, pam
domains = TESTLAB
# SSSD will not start if you do not configure any domains.
# Add new domain configurations as [domain/<NAME>] sections, and
# then add the list of domains (in the order you want them to be
# queried) to the "domains" attribute below and uncomment it.
; domains = LDAP
[nss]
[pam]
# Example LDAP domain
[TESTLAB.COM/LDAP]
id_provider = ldap
auth_provider = ldap
# ldap_schema can be set to "rfc2307", which stores group member names in the
# "memberuid" attribute, or to "rfc2307bis", which stores group member DNs in
# the "member" attribute. If you do not know this value, ask your LDAP
# administrator.
ldap_schema = rfc2307
ldap_uri = ldap://ldap.testlab.com
ldap_search_base = dc=testlab,dc=com
# Note that enabling enumeration will have a moderate performance impact.
# Consequently, the default value for enumeration is FALSE.
# Refer to the sssd.conf man page for full details.
enumerate = true
# Allow offline logins by locally storing password hashes (default: false).
cache_credentials = true
# An example Active Directory domain. Please note that this configuration
# works for AD 2003R2 and AD 2008, because they use pretty much RFC2307bis
# compliant attribute names. To support UNIX clients with AD 2003 or older,
# you must install Microsoft Services For Unix and map LDAP attributes onto
# msSFU30* attribute names.
[domain/TESTLAB]
id_provider = ldap
auth_provider = krb5
chpass_provider = krb5
ldap_uri = ldap://ldap.testlab.com
ldap_search_base = dc=testlab,dc=com
ldap_schema = rfc2307bis
ldap_sasl_mech = GSSAPI
ldap_user_object_class = user
ldap_group_object_class = group
ldap_user_home_directory = unixHomeDirectory
ldap_user_principal = userPrincipalName
ldap_account_expire_policy = ad
ldap_force_upper_case_realm = true
krb5_server = 10.0.0.10
krb5_realm = TESTLAB.COM
It might help if you used the correct domain name on the 'idmap config' lines.
You have 'workgroup = TEST' but 'idmap config TESTLAB : backend = rid'; these must match, so change 'TESTLAB' to 'TEST'.
Oh, and if your version of Samba is >= 4.8.0, then remove sssd; you cannot use Samba >= 4.8.0 with sssd.
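Putting that together, the corrected idmap block would look something like this (a sketch: domain name matching the workgroup, only one backend per domain, ranges taken from your existing config):

```ini
idmap config * : backend = tdb
idmap config * : range = 10000-20000
idmap config TEST : backend = rid
idmap config TEST : range = 30000-40000
```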
OK, I would remove these lines from your smb.conf:
server role = MEMBER SERVER
password server = adsrv1.testlab.com
encrypt passwords = yes
winbind enum users = yes
winbind enum groups = yes
winbind nested groups = yes
netbios name = smbsrv
os level = 20
local master = yes
acl map full control = true
ea support = yes
dos filetimes = yes
enable privileges = yes
guest ok = no
browseable = yes
idmap config TESTLAB : backend = ad
create mask = 0770
valid users = #"Domain Admins" #"Domain Users"
write list = #"Domain Admins" #"Domain Users"
Then set the permissions on the share from Windows (security tab)
I would also upgrade Debian to Buster; this will get you a later version of Samba.
I am new to the Azure SubscriptionClient. I am trying to get the total message count from an Azure Service Bus subscription with Python.
Please try something like the following (note: this uses the older azure-servicebus 0.x SubscriptionClient API):
from azure.servicebus import SubscriptionClient
conn_str = "Endpoint=sb://<service-bus-namespace-name>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=access-key="
topic_name = "test"
subscription_name = "test"
client = SubscriptionClient.from_connection_string(conn_str, subscription_name, topic_name)
props = client.get_properties()
message_count = props['message_count']
print(message_count)
This worked for me:
from azure.servicebus.management import ServiceBusAdministrationClient

CONNECTION_STR = "<your_connection_string>"
TOPIC_NAME = "<your_topic_name>"
SUBSCRIPTION_NAME = "<your_subscription_name>"

with ServiceBusAdministrationClient.from_connection_string(CONNECTION_STR) as servicebus_mgmt_client:
    runtime_properties = servicebus_mgmt_client.get_subscription_runtime_properties(TOPIC_NAME, SUBSCRIPTION_NAME)
    number_of_messages_in_subscription = runtime_properties.active_message_count

(I dropped the unused azure.servicebus.aio import and the needless global statement.) Note that active_message_count excludes dead-lettered messages; the returned runtime properties also expose total_message_count and dead_letter_message_count if you need those.
Source: https://github.com/Azure/azure-sdk-for-python/blob/1709ec7898c87e4369f5324302f274f254857dc3/sdk/servicebus/azure-servicebus/samples/async_samples/mgmt_subscription_async.py
listjobs.json?project=myproject returns a 404,
but the other endpoints work, for example listspiders.json?project=myproject. The failing call:
curl "http://localhost:6800/listjobs.json?project=myproject"
I forgot to write the following configuration:
[services]
schedule.json = scrapyd.webservice.Schedule
cancel.json = scrapyd.webservice.Cancel
addversion.json = scrapyd.webservice.AddVersion
listprojects.json = scrapyd.webservice.ListProjects
listversions.json = scrapyd.webservice.ListVersions
listspiders.json = scrapyd.webservice.ListSpiders
delproject.json = scrapyd.webservice.DeleteProject
delversion.json = scrapyd.webservice.DeleteVersion
listjobs.json = scrapyd.webservice.ListJobs
daemonstatus.json = scrapyd.webservice.DaemonStatus
I'm using Node.js to subscribe to SNS on AWS, and SQS to handle queues. How do I know when a file has been uploaded to S3, so that a message is sent to my Node.js app automatically via SNS? Sorry, my English is not good.
You can do that: configure S3 to send a notification via SNS whenever a file is uploaded to the bucket. See:
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
require 'aws-sdk-s3' # v2: require 'aws-sdk'
req = {}
req[:bucket] = bucket_name
events = ['s3:ObjectCreated:*']
notification_configuration = {}
# Add function
lc = {}
lc[:lambda_function_arn] = 'my-function-arn'
lc[:events] = events
lambda_configurations = []
lambda_configurations << lc
notification_configuration[:lambda_function_configurations] = lambda_configurations
# Add queue
qc = {}
qc[:queue_arn] = 'my-queue-arn'
qc[:events] = events
queue_configurations = []
queue_configurations << qc
notification_configuration[:queue_configurations] = queue_configurations
# Add topic
tc = {}
tc[:topic_arn] = 'my-topic-arn'
tc[:events] = events
topic_configurations = []
topic_configurations << tc
notification_configuration[:topic_configurations] = topic_configurations
req[:notification_configuration] = notification_configuration
req[:use_accelerate_endpoint] = false
s3 = Aws::S3::Client.new(region: 'us-west-2')
s3.put_bucket_notification_configuration(req)
You can refer to this code as well.
I installed pyramid_beaker and added it into my include:
session_factory = session_factory_from_settings(settings)
config = Configurator(settings=settings, session_factory=session_factory)
config.include('pyramid_beaker')
I added these settings into development.ini:
session.type = file
session.data_dir = /tmp/sessions/data
session.lock_dir = /tmp/sessions/lock
session.key = key
session.secret = secret
session.cookie_on_exception = true
But the session never ends. What am I doing wrong?
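In case it helps anyone reading this later: nothing in the settings above tells Beaker when to expire the session. If the intent is for sessions to end after some period, Beaker supports timeout and cookie-expiry options; a sketch (the values are examples, in seconds):

```ini
session.timeout = 3600
session.cookie_expires = 3600
```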