Samba4 add additional domain controller - dns

I am trying to add an additional domain controller to my current domain, which is set up on a Synology NAS. Following various pieces of documentation (including the standard Samba docs) I am stuck with the following problem:
Calling the following command:
sudo samba-tool domain join home.intern DC --option="dsdb:schema update allowed = yes"
Results in the following output:
INFO 2022-07-02 15:57:30,583 pid:17493 /usr/lib/python3/dist-packages/samba/join.py #107: Finding a writeable DC for domain 'home.intern'
INFO 2022-07-02 15:57:30,603 pid:17493 /usr/lib/python3/dist-packages/samba/join.py #109: Found DC nas.home.intern
INFO 2022-07-02 15:57:30,836 pid:17493 /usr/lib/python3/dist-packages/samba/join.py #1543: workgroup is HOME
INFO 2022-07-02 15:57:30,836 pid:17493 /usr/lib/python3/dist-packages/samba/join.py #1546: realm is home.intern
Adding CN=DC1,OU=Domain Controllers,DC=home,DC=intern
Adding CN=DC1,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=home,DC=intern
Adding CN=NTDS Settings,CN=DC1,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=home,DC=intern
Adding SPNs to CN=DC1,OU=Domain Controllers,DC=home,DC=intern
Setting account password for DC1$
Enabling account
Calling bare provision
INFO 2022-07-02 15:57:47,388 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #2122: Looking up IPv4 addresses
INFO 2022-07-02 15:57:47,391 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #2139: Looking up IPv6 addresses
WARNING 2022-07-02 15:57:47,393 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #2146: No IPv6 address will be assigned
INFO 2022-07-02 15:57:48,950 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #2294: Setting up secrets.ldb
INFO 2022-07-02 15:57:48,979 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #2299: Setting up the registry
INFO 2022-07-02 15:57:49,007 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #2302: Setting up the privileges database
INFO 2022-07-02 15:57:49,063 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #2305: Setting up idmap db
INFO 2022-07-02 15:57:49,100 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #2312: Setting up SAM db
INFO 2022-07-02 15:57:49,115 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #897: Setting up sam.ldb partitions and settings
INFO 2022-07-02 15:57:49,118 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #909: Setting up sam.ldb rootDSE
INFO 2022-07-02 15:57:49,126 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #1322: Pre-loading the Samba 4 and AD schema
Unable to determine the DomainSID, can not enforce uniqueness constraint on local domainSIDs
INFO 2022-07-02 15:57:49,291 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #2364: A Kerberos configuration suitable for Samba AD has been generated at /var/lib/samba/private/krb5.conf
INFO 2022-07-02 15:57:49,292 pid:17493 /usr/lib/python3/dist-packages/samba/provision/__init__.py #2366: Merge the contents of this file with your system krb5.conf or replace it with this one. Do not create a symlink
Provision OK for domain DN DC=home,DC=intern
Starting replication
Schema-DN[CN=Schema,CN=Configuration,DC=home,DC=intern] objects[402/1573] linked_values[0/0]
Schema-DN[CN=Schema,CN=Configuration,DC=home,DC=intern] objects[804/1573] linked_values[0/0]
Schema-DN[CN=Schema,CN=Configuration,DC=home,DC=intern] objects[1206/1573] linked_values[0/0]
Schema-DN[CN=Schema,CN=Configuration,DC=home,DC=intern] objects[1573/1573] linked_values[0/0]
Analyze and apply schema objects
schema_data_modify: we are not master: reject modify request
Failed to commit objects: WERR_GEN_FAILURE
Join failed - cleaning up
Deleted CN=DC1,OU=Domain Controllers,DC=home,DC=intern
Deleted CN=NTDS Settings,CN=DC1,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=home,DC=intern
Deleted CN=DC1,CN=Servers,CN=Default-First-Site-Name,CN=Sites,CN=Configuration,DC=home,DC=intern
ERROR(runtime): uncaught exception - (31, "Failed to process 'chunk' of DRS replicated objects: WERR_GEN_FAILURE")
File "/usr/lib/python3/dist-packages/samba/netcmd/__init__.py", line 186, in _run
return self.run(*args, **kwargs)
File "/usr/lib/python3/dist-packages/samba/netcmd/domain.py", line 661, in run
join_DC(logger=logger, server=server, creds=creds, lp=lp, domain=domain,
File "/usr/lib/python3/dist-packages/samba/join.py", line 1559, in join_DC
ctx.do_join()
File "/usr/lib/python3/dist-packages/samba/join.py", line 1449, in do_join
ctx.join_replicate()
File "/usr/lib/python3/dist-packages/samba/join.py", line 980, in join_replicate
repl.replicate(ctx.schema_dn, source_dsa_invocation_id,
File "/usr/lib/python3/dist-packages/samba/drs_utils.py", line 356, in replicate
raise e
File "/usr/lib/python3/dist-packages/samba/drs_utils.py", line 343, in replicate
self.process_chunk(level, ctr, schema, req_level, req, first_chunk)
File "/usr/lib/python3/dist-packages/samba/drs_utils.py", line 236, in process_chunk
self.net.replicate_chunk(self.replication_state, level, ctr,
Does anybody know why this error happens and what it means?
Thanks!

The problem was some tombstone objects on the main domain controller. The provisioning process tried to delete those, which is not possible because the new DC is not the schema master. After taking care of the "invalid" entries on the main DC, the provisioning ran fine.
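One way to find and clean up such leftover entries on the existing DC is samba-tool's database check. This is only a hedged sketch of the general approach (run on the main DC), not necessarily the exact commands used here:
# report problems only, across all naming contexts
sudo samba-tool dbcheck --cross-ncs
# apply the suggested fixes (prompts per change unless --yes is added)
sudo samba-tool dbcheck --cross-ncs --fix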
If somebody else faces issues with the provisioning process, it is always a good idea to add the "-d 3" parameter, which raises the debug level and gives much more (and more important) output.
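For example, the join command from above with the raised debug level would be:
sudo samba-tool domain join home.intern DC --option="dsdb:schema update allowed = yes" -d 3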
Bye

Related

Transport Layer Security Elasticsearch configuration

Note: my version of Elasticsearch is 7.15.0.
I'm new to Elasticsearch and I'm trying to use Kibana alerts. To do that I must create a Rule and a Connector, but when I selected that field I was informed that I have to enable Transport Layer Security and API keys. To do so I followed the Elastic Transport Layer Security guide, which describes these steps:
Encrypt inter-node communications with Transport Layer Security:
1. Open the $ES_PATH_CONF/elasticsearch.yml file and make the following changes:
a. Add the cluster.name setting and enter a name for your cluster:
cluster.name: my-cluster
b. Add the node.name setting and enter a name for the node. The node name defaults to the hostname of the machine when Elasticsearch starts.
node.name: node-1
c. Add the following settings to enable inter-node communication and provide access to the node’s certificate.
Because you are using the same elastic-certificates.p12 file on every node in your cluster, set the verification mode to certificate:
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
Since the elastic-certificates file is not generated automatically during the installation of the software, it must be generated with elasticsearch-certutil inside the /usr/share/elasticsearch/bin directory:
a. First:
cd /usr/share/elasticsearch/bin
b. Run elasticsearch-certutil to generate the elastic-stack-ca.zip certificate file:
bin/elasticsearch-certutil ca
c. Unzip the file to extract all the generated files and move them to the /etc/elasticsearch directory.
unzip elastic-stack-ca.zip
Now the problem occurs when starting the Elasticsearch service:
sudo service elasticsearch restart
Job for elasticsearch.service failed because the control process exited with error code. See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
I tried to see where the error is located by running these two commands, but I did not understand the output.
Have you checked the permissions and owners on the files? Permissions should be 640 for the files, and the owner/group should be root:elasticsearch.
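For example, assuming the certificate referenced in the config was copied to /etc/elasticsearch (a hedged sketch; adjust the filename to whatever you actually placed there):
# give the file to root:elasticsearch and restrict it to owner/group read
sudo chown root:elasticsearch /etc/elasticsearch/elastic-certificates.p12
sudo chmod 640 /etc/elasticsearch/elastic-certificates.p12
Then restart the service again with sudo service elasticsearch restart.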

New authority Jhipster

I followed the instructions from the article https://www.jhipster.tech/tips/025_tip_create_new_authority.html and added a new authority "ROLE_CLIENT". After that, I restarted my project and an error appeared:
2021-07-13 00:22:46.592 WARN 3596 --- [ restartedMain] iguration$LoadBalancerCaffeineWarnLogger : Spring Cloud LoadBalancer is currently working with the default cache. You can switch to using Caffeine cache, by adding it and org.springframework.cache.caffeine.CaffeineCacheManager to the classpath.
2021-07-13 00:22:46.806 ERROR 3596 --- [ gateway-task-1] t.j.c.liquibase.AsyncSpringLiquibase : Liquibase could not start correctly, your database is NOT ready: Validation Failed:
1 change sets check sum
config/liquibase/changelog/00000000000000_initial_schema.xml::00000000000001::jhipster was: 8:06225dfc05215e6b13d8a4febd3fd90f but is now: 8:2272077bd3e9baf389312f0e018e5795
liquibase.exception.ValidationFailedException: Validation Failed:
1 change sets check sum
config/liquibase/changelog/00000000000000_initial_schema.xml::00000000000001::jhipster was: 8:06225dfc05215e6b13d8a4febd3fd90f but is now: 8:2272077bd3e9baf389312f0e018e5795
at liquibase.changelog.DatabaseChangeLog.validate(DatabaseChangeLog.java:299)
at liquibase.Liquibase.lambda$update$1(Liquibase.java:237)
at liquibase.Scope.lambda$child$0(Scope.java:160)
at liquibase.Scope.child(Scope.java:169)
at liquibase.Scope.child(Scope.java:159)
at liquibase.Scope.child(Scope.java:138)
at liquibase.Liquibase.runInScope(Liquibase.java:2370)
at liquibase.Liquibase.update(Liquibase.java:217)
at liquibase.Liquibase.update(Liquibase.java:203)
at liquibase.integration.spring.SpringLiquibase.performUpdate(SpringLiquibase.java:321)
at liquibase.integration.spring.SpringLiquibase.afterPropertiesSet(SpringLiquibase.java:275)
at org.springframework.boot.autoconfigure.liquibase.DataSourceClosingSpringLiquibase.afterPropertiesSet(DataSourceClosingSpringLiquibase.java:46)
at tech.jhipster.config.liquibase.AsyncSpringLiquibase.initDb(AsyncSpringLiquibase.java:118)
at tech.jhipster.config.liquibase.AsyncSpringLiquibase.lambda$afterPropertiesSet$0(AsyncSpringLiquibase.java:93)
at tech.jhipster.async.ExceptionHandlingAsyncTaskExecutor.lambda$createWrappedRunnable$1(ExceptionHandlingAsyncTaskExecutor.java:78)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
By the way, I am using a microservices architecture and JWT authentication.
This is normal: a Liquibase changeset is supposed to be immutable.
This is why Liquibase records an MD5 checksum with each changeset entry in the database changelog table: to detect differences between what is currently in the changelog and what was actually run against the database.
When you modified authority.csv, the checksum of the changeset changed and Liquibase rightly complained.
So, you have 3 alternative solutions:
Create a separate changeset to insert only your new authority (the preferred way in production).
Clear the checksum of your changeset in the database: use a db client like DBeaver to connect to your db and clear the MD5SUM value for your changelog row in the DATABASECHANGELOG table (see the SQL sketch after this list). Look at https://docs.liquibase.com/concepts/basic/databasechangelog-table.html
Drop your database to restart from scratch; this can work for a dev database if you don't care about your data.
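For the second option, the update against the Liquibase bookkeeping table would look roughly like this (a hedged sketch: table and column names are the Liquibase defaults, the id/author/filename are taken from the error above, and your database may require different quoting or a schema prefix):
-- clear the stored checksum for the modified changeset only
UPDATE DATABASECHANGELOG
SET MD5SUM = NULL
WHERE ID = '00000000000001'
AND AUTHOR = 'jhipster'
AND FILENAME = 'config/liquibase/changelog/00000000000000_initial_schema.xml';
On the next startup Liquibase recomputes the checksum and stores it instead of failing validation.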
Final advice: learn more about Liquibase from the official docs and also from the JHipster docs.

Avoid Google Dataproc logging

I'm performing millions of operations using Google Dataproc, with one problem: the size of the logging data.
I do not perform any show or any other kind of print, but the 7 lines of INFO below, multiplied by millions of operations, add up to a really big log volume.
Is there any way to stop Google Dataproc from logging them?
I already tried the following in Dataproc, without success:
https://cloud.google.com/dataproc/docs/guides/driver-output#configuring_logging
These are the 7 lines I want to get rid of:
18/07/30 13:11:54 INFO org.spark_project.jetty.util.log: Logging initialized #...
18/07/30 13:11:55 INFO org.spark_project.jetty.server.Server: ....z-SNAPSHOT
18/07/30 13:11:55 INFO org.spark_project.jetty.server.Server: Started #...
18/07/30 13:11:55 INFO org.spark_project.jetty.server.AbstractConnector: Started ServerConnector#...
18/07/30 13:11:56 INFO com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase: GHFS version: ...
18/07/30 13:11:57 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at ...
18/07/30 13:12:01 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_...
What you are looking for is an exclusion filter: browse from your Console to Stackdriver Logging > Logs ingestion > Exclusions and click on "Create exclusion". As explained there:
To create a logs exclusion, edit the filter on the left to only match
logs that you do not want to be included in Stackdriver Logging. After
an exclusion has been created, matched logs will no longer be
accessible in Stackdriver Logging.
In your case, the filter should be something like this:
resource.type="cloud_dataproc_cluster"
textPayload:"INFO org.spark_project.jetty.util.log: Logging initialized"
...
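To also drop the other lines you listed, the same filter can be extended with additional textPayload clauses combined with OR. This is only a hedged sketch based on the loggers visible in your output; adjust the substrings as needed:
resource.type="cloud_dataproc_cluster"
(textPayload:"INFO org.spark_project.jetty"
OR textPayload:"INFO com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase"
OR textPayload:"INFO org.apache.hadoop.yarn.client")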

How to monitor multiple devices with fw1-loggrabber

I am currently working on a logging system where I need to pull logs out of Checkpoint devices.
I use fw1-loggrabber with OPSEC LEA, and I successfully pulled logs from a Checkpoint firewall.
Now let's say I have 100 devices.
Do I need to configure and run fw1-loggrabber 100 times, or can I use one lea.conf and one fw1-loggrabber.conf to configure all the devices I want to monitor and run it once?
My currently configured files:
lea.conf:
lea_server auth_type sslca
lea_server ip 255.255.255.255
lea_server auth_port 18184
lea_server port 18184
opsec_sic_name "CN=Test,O=test..hi7arv"
lea_server opsec_entity_sic_name "cn=tt_mgmt,o=test..hi7arv"
opsec_sslca_file /opt/pkg_rel/p12_cert_file
fw1-loggrabber.conf:
DEBUG_LEVEL="0"
FW1_LOGFILE="fw.log"
FW1_OUTPUT="logs"
FW1_TYPE="ng"
FW1_MODE="normal"
ONLINE_MODE="yes"
SHOW_FIELDNAMES="yes"
DATEFORMAT="std"
SYSLOG_FACILITY="LOCAL1"
RESOLVE_MODE="no"
RECORD_SEPARATOR="|"
LOGGING_CONFIGURATION=file
OUTPUT_FILE_PREFIX="/var/log/testFolder/Checkpoint/fw1"
OUTPUT_FILE_ROTATESIZE=1048576
If it is not possible to configure and run everything from one configuration file (or two), are there any alternatives for pulling logs using Checkpoint OPSEC LEA?
Thanks.
When you run fw1-loggrabber, simply run it with as many lea.conf configs as you like; it will pull logs from as many devices as you want.
Example:
/usr/local/fw1-loggrabber/bin/fw1-loggrabber \
  -c /usr/local/fw1-loggrabber/fw1-loggrabber.conf \
  -l /usr/local/fw1-loggrabber/lea1.conf \
  -l /usr/local/fw1-loggrabber/lea2.conf
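If you keep one lea<N>.conf file per device, a small shell loop can build that command for all of them. This is only a hedged sketch; the paths and file names are illustrative:
# collect one -l flag per lea config and pass them all in a single run
LEA_ARGS=""
for f in /usr/local/fw1-loggrabber/lea*.conf; do
  LEA_ARGS="$LEA_ARGS -l $f"
done
/usr/local/fw1-loggrabber/bin/fw1-loggrabber -c /usr/local/fw1-loggrabber/fw1-loggrabber.conf $LEA_ARGS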

Build Cassandra Cluster

I need to build a Cassandra cluster for my company. I use apache-cassandra-2.1.12-bin.tar.gz downloaded from the official website.
I have three machines:
192.168.0.210;
192.168.0.209;
192.168.0.208;
I changed the cassandra.yaml for each one.
Step1: On 192.168.0.210:
listen_address: 192.168.0.210
seeds: 192.168.0.210
Step2: On 192.168.0.209:
listen_address: 192.168.0.209
seeds: 192.168.0.210
Step3: On 192.168.0.208:
listen_address: 192.168.0.208
seeds: 192.168.0.210
I searched online; some people also changed rpc_address, while others did not. When I changed rpc_address to 0.0.0.0 and then ran ./cassandra, it showed:
Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException:
If rpc_address is set to a wildcard address (0.0.0.0), then you must set
broadcast_rpc_address to a value other than 0.0.0.0
So I changed broadcast_rpc_address to 1.2.3.4 and ran ./cassandra again; it showed:
ERROR 05:49:42 Fatal configuration error
org.apache.cassandra.exceptions.ConfigurationException: Invalid yaml
at org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:120) ~[apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:84) ~[apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:161) ~[apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:136) ~[apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:168) [apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:562) [apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:651) [apache-cassandra-2.1.12.jar:2.1.12]
Caused by: org.yaml.snakeyaml.parser.ParserException: while parsing a block mapping; expected <block end>, but found BlockMappingStart; in 'reader', line 455, column 2:
broadcast_rpc_address: 1.2.3.4
^
at org.yaml.snakeyaml.parser.ParserImpl$ParseBlockMappingKey.produce(ParserImpl.java:570) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:158) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:143) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:230) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:159) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.composer.Composer.composeDocument(Composer.java:122) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:105) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:120) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:481) ~[snakeyaml-1.11.jar:na]
at org.yaml.snakeyaml.Yaml.load(Yaml.java:412) ~[snakeyaml-1.11.jar:na]
at org.apache.cassandra.config.YamlConfigurationLoader.logConfig(YamlConfigurationLoader.java:126) ~[apache-cassandra-2.1.12.jar:2.1.12]
at org.apache.cassandra.config.YamlConfigurationLoader.loadConfig(YamlConfigurationLoader.java:104) ~[apache-cassandra-2.1.12.jar:2.1.12]
... 6 common frames omitted
Invalid yaml
Fatal configuration error; unable to start. See log for stacktrace.
So my questions:
1. Do I need to change rpc_address (some people do, while some don't)?
2. If yes, how do I handle broadcast_rpc_address?
3. Except for rpc_address/broadcast_rpc_address, what else do I need to do to build the Cassandra cluster?
rpc_address is the address or interface to bind the Thrift RPC service and native transport server to. You could leave it blank, in which case Cassandra will use the node's hostname. It is not recommended to set it to 0.0.0.0 unless the node is protected, for example by a firewall; otherwise anyone could access Cassandra.
broadcast_rpc_address is the RPC address to broadcast to drivers and other Cassandra nodes. This cannot be set to 0.0.0.0, because the drivers need a valid IP address to send their requests to. If you set rpc_address to 0.0.0.0, you should set broadcast_rpc_address to the node's own IP. In your example, that is 192.168.0.208, 192.168.0.209, or 192.168.0.210.
For question 3, you just need to set the cluster name to be the same on all nodes.
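For example, on 192.168.0.210 the relevant cassandra.yaml settings would look roughly like this. This is a hedged sketch: the cluster name is illustrative, rpc_address: 0.0.0.0 is only needed if you really want to listen on all interfaces, and in the stock file the seeds value sits under seed_provider -> parameters, so only change the value there:
cluster_name: 'MyCluster'
listen_address: 192.168.0.210
rpc_address: 0.0.0.0
broadcast_rpc_address: 192.168.0.210
# under seed_provider -> parameters:
seeds: "192.168.0.210"
Also make sure any line you add, such as broadcast_rpc_address, starts at the first column like the other top-level settings; a stray leading space is the typical cause of the kind of "Invalid yaml" parser error shown above.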
For rpc_address, try using:
rpc_address: localhost
Here are answers to your questions:
1. Do I need to change rpc_address (some people do, while some don't)?
No, you don't need to, unless you want your clients to connect to a different IP address rather than the actual IP address of the server (an example would be a SQL Server alias, etc.).
2. If yes, how do I handle broadcast_rpc_address?
For broadcast, I think it would be the public IPs as the broadcast_address, or 0.0.0.0.
3. Except for rpc_address/broadcast_rpc_address, what else do I need to do to build the Cassandra cluster?
Make sure the cluster name is the same for all nodes, and that for your first node, when setting up the cluster for the first time, the seed is the same as the listen IP; then for the second node the seed is the first node, and so on.
