I'm trying to stay sane while configuring a Bacula server on my virtual CentOS Linux release 7.3.1611 machine to do a basic local backup job.
I made all the configuration changes I found necessary in the conf files and prepared the MySQL database accordingly.
When I want to start a job (local backup for now), I enter the following commands in bconsole:
*Connecting to Director 127.0.0.1:9101
1000 OK: bacula-dir Version: 5.2.13 (19 February 2013)
Enter a period to cancel a command.
*label
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Automatically selected Storage: File
Enter new Volume name: MyVolume
Defined Pools:
1: Default
2: File
3: Scratch
Select the Pool (1-3): 2
This returns:
Connecting to Storage daemon File at 127.0.0.1:9101 ...
Failed to connect to Storage daemon.
Do not forget to mount the drive!!!
You have messages.
where the message is:
12-Sep 12:05 bacula-dir JobId 0: Fatal error: authenticate.c:120 Director unable to authenticate with Storage daemon at "127.0.0.1:9101". Possible causes:
Passwords or names not the same or
Maximum Concurrent Jobs exceeded on the SD or
SD networking messed up (restart daemon).
Please see http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00260000000000000000 for help.
I double- and triple-checked all the conf files for consistent names and passwords, but I don't know where else to look for the error.
I will gladly post any parts of the conf files, but I don't want to bloat this question right away if it isn't necessary. Thank you for any hints.
This might help someone who makes the same mistake I did:
After working through manual page after manual page, I found it was my own mistake. For a reason I don't precisely recall (I guess to troubleshoot another issue earlier), I had set all ports to 9101: for the Director, the File daemon and the Storage daemon.
So I assume the Bacula components blocked each other's communication on port 9101. After restoring the default ports according to the manual (9101 for the Director, 9102 for the File daemon, 9103 for the Storage daemon), it worked and I can now back up locally.
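For reference, a minimal sketch of the relevant port directives with the defaults restored (the resource names and password are placeholders based on the transcript above, not my full config):
# bacula-dir.conf -- the Storage resource must use the SD's port (9103), not the Director's (9101)
Storage {
  Name = File
  Address = 127.0.0.1
  SDPort = 9103
  Password = "same-as-in-bacula-sd.conf"
  Device = FileStorage
  Media Type = File
}
# bacula-sd.conf -- the Storage daemon itself listens on 9103 by default
Storage {
  Name = bacula-sd
  SDPort = 9103
}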
You have to add the Director's name from the backup server: edit /etc/bacula/bacula-fd.conf on the remote client and look for the section "List Directors who are permitted to contact this File daemon":
Director {
Name = BackupServerName-dir
Password = "use *-dir password from the same file"
}
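After editing, restart the File daemon so the change takes effect (assuming a systemd service named bacula-fd; the service name may differ on your distribution):
systemctl restart bacula-fd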
I have created a Cassandra database in DataStax Astra and am trying to load a CSV file using DSBulk on Windows. However, when I run the dsbulk load command, the operation neither completes nor fails. I receive no error message at all, and I have to manually terminate the operation after several minutes. I have tried to wait it out and have let the operation run for 30 minutes or more with no success.
I know that a free tier of Astra might run slower, but wouldn't I see at least some indication that it is attempting to load data, even if slowly?
When I run the command, this is the output that is displayed and nothing further:
C:\Users\JT\Desktop\dsbulk-1.8.0\bin>dsbulk load -url test1.csv -k my_keyspace -t test_table -b "secure-connect-path.zip" -u my_user -p my_password -header true
Username and password provided but auth provider not specified, inferring PlainTextAuthProvider
A cloud secure connect bundle was provided: ignoring all explicit contact points.
A cloud secure connect bundle was provided and selected operation performs writes: changing default consistency level to LOCAL_QUORUM.
Operation directory: C:\Users\JT\Desktop\dsbulk-1.8.0\bin\logs\LOAD_20210407-143635-875000
I know that DataStax recently changed Astra so that you need credentials from a generated token to connect DSBulk, but I have a classic DB instance that won't accept those token credentials when they are entered in the dsbulk load command. So I use my regular user/password.
When I check the DSBulk logs, the only text is the same output displayed in the console, which I have shown in the code block above.
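In case more verbose output would help the diagnosis, I can rerun with DSBulk's logging turned up (assuming the --log.verbosity flag behaves the same way in 1.8):
dsbulk load -url test1.csv -k my_keyspace -t test_table -b "secure-connect-path.zip" -u my_user -p my_password -header true --log.verbosity 2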
If it means anything, I have the exact same issue when trying to run the dsbulk count operation.
I have the most recent JDK and have set both the JAVA_HOME and PATH variables.
I have also tried adding dsbulk/bin directory to my PATH variable and had no success with that either.
Do I need to adjust any settings in my Astra instance?
Lastly, is it possible that my basic laptop is simply not powerful enough for this operation, or is it just running the operation extremely slowly?
Any ideas or help is much appreciated!
I'm using a GCP VM instance to run my Python script as a background process.
But I found that my script received a SIGTERM.
I checked the syslog and daemon.log in /var/log
and I found that my Python script (PID 2316) was terminated by the system.
What VM settings do I need to check?
Judging from this log line in your screenshot:
Nov 12 18:23:10 ai-task-1 systemd-logind[1051]: Power key pressed.
I would say that your script's process was SIGTERMed as a result of the hypervisor gracefully shutting down the VM, which happens when a GCP user or service account with admin access to the project performs a GCE compute.instances.stop request.
You can look for that request's logs for more details on where it came from, either in the Logs Viewer/Explorer or with gcloud logging read --freshness=30d (man), using filters like:
resource.type="gce_instance"
"ai-task-1"
timestamp>="2020-11-12T18:22:40Z"
timestamp<="2020-11-12T18:23:40Z"
Though depending on the retention period for your _Default bucket (30 days by default), these logs may have already expired.
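To check what retention is currently configured on the _Default bucket (in recent gcloud versions; older ones may need the beta component):
gcloud logging buckets describe _Default --location=global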
I want to use Ansible to automate my deployment process. Let me say a few words about it. The deployment process in my case consists of two steps:
update the DB (SQL script)
copy a predefined set of files to various network folders (on different machines)
For this purpose I use a special self-written program called Installer.exe. If I run it myself, it performs operations with my credentials, so it has all my rights, e.g. access to the network folders and the SQL database.
I want to use Ansible as a wrapper for my program (Installer.exe), not as a replacement for it. My target scenario: Ansible prepares the configuration files and runs my installer on a remote Windows machine. I've run into a problem: when my program is run by Ansible, it doesn't have my full rights. It can successfully access SQL Database 1 on the same machine, but it can't access SQL Database 2 on a remote machine or access a network folder. I always get "access denied" on network access, and SQL Database 2 says something about NT AUTHORITY\ANONYMOUS LOGON. It looks like the double-hop problem, but not exactly, as far as I understand it: the double hop is about service accounts, whereas I am trying to access the remote server with my own personal account.
UPD 1:
My variables for that group are:
ansible_user: qtros#ABC.RU
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
ansible_winrm_operation_timeout_sec: 120
ansible_winrm_read_timeout_sec: 150
ansible_winrm_transport: kerberos
ansible_winrm_kerberos_delegation: yes
Before any actions with Ansible I run the following command:
$> kinit qtros#ABC.RU
and enter my password. Later, if I run klist, I can see some valid tickets. I intended to use a domain account, not the local system account. Am I doing it right?
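Since I set ansible_winrm_kerberos_delegation above, should I instead be requesting a forwardable ticket (assuming delegation requires one)? For example:
$> kinit -f qtros#ABC.RU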
UPD 2: if I add such a command to the playbook:
...
raw: "klist"
...
I get something like:
fatal: [targetserver.abc.ru]: FAILED! => {"changed": true, "failed": true, "rc": 1, "stderr": "", "stdout": "\r\nCurrent LogonId is 0:0x20265db4\r\nError calling API LsaCallAuthenticationPackage (ShowTickets substatus): 1312\r\n\r\nklist failed with 0xc000005f/-1073741729: A specified logon session does not exist. It may already have been terminated.\r\n\r\n\r\n", "stdout_lines": ["", "Current LogonId is 0:0x20265db4", "Error calling API LsaCallAuthenticationPackage (ShowTickets substatus): 1312", "", "klist failed with 0xc000005f/-1073741729: A specified logon session does not exist. It may already have been terminated.", "", ""]}
Based on your problem statement, it sounds like the Windows machine is running Installer.exe under the Local System account, which has no rights outside of the Windows machine itself and will always fail when trying to run any procedure against SQL Database 2. This wouldn't be a Kerberos double-hop scenario; for one, there's only one hop between the Windows machine in the middle running Installer.exe and SQL Database 2. Since Ansible is wrapping Installer.exe, then unless I'm missing something, run the Ansible play against the Windows machine with AD domain credentials that have the appropriate rights to SQL Database 2.
EDIT: As the focus of your question was resolving the SQL Database 2 message regarding NT AUTHORITY\ANONYMOUS LOGON and whether or not this is a Kerberos double-hop problem (it doesn't look like it), that's what I answered. Note that you have ansible_user defined but not ansible_ssh_pass. There's an apparent bug in the documentation (http://docs.ansible.com/ansible/intro_windows.html), which lists ansible_ssh_password; use ansible_ssh_pass instead.
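For example, the group vars could be extended like this (the vault variable name is just a hypothetical placeholder; any way of supplying the value works):
ansible_user: qtros#ABC.RU
ansible_ssh_pass: "{{ vault_qtros_password }}"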
As a new user of SSH and the Amazon AWS EC2 Dashboard, I am testing whether I can save data onto a volume in one instance and then access that data from another instance by attaching the volume to it (after terminating the first instance).
When I create the first instance, the AMI is "Amazon Linux AMI 2014.03.2 (HVM)" and the family is "general purpose" with EBS storage only. I automatically assign a public IP address to the instance. I configure the root volume so that it does NOT delete on termination.
As soon as the instance is launched, I open up PuTTY, set the host name to the instance's public IP address on port 22, and authenticate using a private key saved on disk that I generated earlier.
Upon signing into the instance, I create a text file by typing the following command:
echo "testing" > test.txt
I then confirm that the text "testing" is saved to the file "test.txt":
less test.txt
I see the text "testing", thus confirming that it is saved to the file. (I am assuming at this point that it is saved onto the volume, but I am not entirely sure.)
I then proceed to terminate the instance. I launch another one using the same AMI and instance type, with a different public IP address. In addition to the root volume, I attach the volume that was used as the root volume for the previous instance. (Oddly enough, the snapshot IDs for the previous volume and the root volume of the new instance are identical.) In addition, I use the same instance tag, the same security group and the same key pair as the previous instance.
I open up PuTTY again, this time using the public IP address of the new instance, but still using the same private key and port as for the previous instance. Upon logging in, I type:
less test.txt
but I am greeted with this message:
test.txt: No such file or directory
Is there any advice that anyone can offer me regarding this issue? Is it even possible to store a text file on a volume? If so, am I performing this operation incorrectly?
As the secondary volume has the same UUID and Amazon Linux uses UUID-based identification for the root filesystem, there is a chance that the secondary volume was taken as the root volume. That would explain the mix-up in choosing the root volume and why the initial attempt to find test.txt failed.
The reboot might have caused the devices to be picked up in a different order, which is why you were then able to find it.
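One way to verify and work around this (the device names are assumptions; check yours with lsblk):
sudo lsblk                       # list attached block devices
sudo blkid                       # compare the filesystem UUIDs of the two volumes
sudo mkdir -p /mnt/old
sudo mount /dev/xvdf1 /mnt/old   # the old volume's partition; adjust to your device
ls /mnt/old/home/ec2-user        # test.txt should be here if it was written to the home directory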
I need help connecting to an Azure database using SymmetricDS 3.5.1. I can't seem to get the configuration correct. I get an error saying "Cannot create PoolableConnectionFactory" with the message either "socket closed" (when I don't specify the ssl parameter) or "login timeout" (when I do specify the ssl parameter). I have specified a timeout in the connection string; however, it does not seem to take effect and defaults to 30 seconds. Is there any documentation on how to connect to an Azure database using SymmetricDS? Anyway, take a look and tell me what I need to change in my engine.properties file. I have the following:
db.url=jdbc:jtds:sqlserver://MyServer.database.windows.net:1433;database=MyDatabase;user=MyUser#MyServer;password=MyPassword;encrypt=true;hostNameInCertificate=*.database.windows.net;loginTimeout=300;useCursors=true;bufferMaxMemory=10240;lobBuffer=5242880;ssl=require
db.user=MyUser#MyServer
db.database=MyDatabase
db.password=MyPassword
db.driver=net.sourceforge.jtds.jdbc.Driver
It turns out you have to use the Microsoft JDBC driver. I didn't see any documentation on how to set it up, so for the sake of others, this is what I did after reading http://www.symmetricds.org/docs/how-to/connect-to-database:
Download the Microsoft JDBC driver
Put the sqljdbc4.jar file in the lib folder of your SymmetricDS installation
Change the *.properties file to use the following connection information:
db.driver=com.microsoft.sqlserver.jdbc.SQLServerDriver
db.url=jdbc:sqlserver://{your_server_name}.database.windows.net:1433;database={database_name};user={user}#{your_server_name};password={password};encrypt=true;hostNameInCertificate=*.database.windows.net;loginTimeout=300;useCursors=true;bufferMaxMemory=10240;lobBuffer=5242880;
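The remaining properties from the original file stay as they were (placeholders shown):
db.user={user}#{your_server_name}
db.database={database_name}
db.password={password}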