Ansible, Windows - how to access a network folder? (security)

I want to use Ansible to automate my deployment process. Let me say a few words about it. The deployment process in my case consists of two steps:
update the DB (SQL script)
copy a predefined set of files to various network folders (on different machines)
For this purpose I use a special self-written program called Installer.exe. If I run it myself, it performs operations with my credentials, so it has all my rights, e.g. access to network folders and the SQL database.
I want to use Ansible as a wrapper for my program (Installer.exe), not instead of it. My target scenario: Ansible prepares configuration files and runs my installer on a remote Windows machine. I've hit a problem: my program, when run by Ansible, doesn't have my full rights. It can successfully access SQL Database 1 on the same machine, but can't access SQL Database 2 on a remote machine or access a network folder. I always get "access denied" on network access, and the SQL database says something about NT AUTHORITY\ANONYMOUS LOGON. It looks like the double-hop problem, but not exactly, as far as I understand it: double hop is about service accounts, but I am trying to access the remote server with my own personal account.
UPD 1:
My variables for that group are:
ansible_user: qtros@ABC.RU
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
ansible_winrm_operation_timeout_sec: 120
ansible_winrm_read_timeout_sec: 150
ansible_winrm_transport: kerberos
ansible_winrm_kerberos_delegation: yes
Before any actions with Ansible I run the following command:
$> kinit qtros@ABC.RU
and enter my password. If I later run klist I can see some valid tickets. I intended to use a domain account, not the local system account. Am I doing it right?
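One thing I am not sure about: with ansible_winrm_kerberos_delegation enabled, the ticket presumably also has to be forwardable for delegation to work. A sketch of how that can be requested and checked (assuming MIT Kerberos kinit/klist):
$> kinit -f qtros@ABC.RU
$> klist -f
(the ticket flags shown should include F, i.e. forwardable)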
UPD 2: if I add the following command to the playbook:
...
raw: "klist"
...
I get something like:
fatal: [targetserver.abc.ru]: FAILED! => {"changed": true, "failed": true, "rc": 1, "stderr": "", "stdout": "\r\nCurrent LogonId is 0:0x20265db4\r\nError calling API LsaCallAuthenticationPackage (ShowTickets substatus): 1312\r\n\r\nklist failed with 0xc000005f/-1073741729: A specified logon session does not exist. It may already have been terminated.\r\n\r\n\r\n", "stdout_lines": ["", "Current LogonId is 0:0x20265db4", "Error calling API LsaCallAuthenticationPackage (ShowTickets substatus): 1312", "", "klist failed with 0xc000005f/-1073741729: A specified logon session does not exist. It may already have been terminated.", "", ""]}

Based on your problem statement, it sounds like the Windows machine is running Installer.exe under the Local System account, which has no rights outside of the Windows machine itself and will always fail trying to run any procedure on SQL Database 2. This wouldn't be a Kerberos double-hop scenario; for one, there's only one hop between the Windows machine in the middle (the one running Installer.exe) and SQL Database 2. Since Ansible is only wrapping Installer.exe, then unless I'm missing something, the fix is to run Installer.exe on the Windows machine with AD domain credentials that have the appropriate rights to SQL Database 2.
EDIT: Since the focus of your question was resolving the SQL Database 2 message about NT AUTHORITY\ANONYMOUS LOGON, and whether or not this is a Kerberos double-hop problem (it doesn't look like it), that's what I answered. Note that you have ansible_user defined but no password variable. There's an apparent bug in the documentation (http://docs.ansible.com/ansible/intro_windows.html), so use ansible_ssh_pass instead of ansible_ssh_password.
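To make that concrete, here is a rough sketch; the paths, the vault variable and the become options are my own assumptions (runas-style become needs a reasonably recent Ansible), so treat it as a starting point rather than a drop-in config:
# group_vars for the Windows hosts (password ideally kept in Ansible Vault)
ansible_user: qtros@ABC.RU
ansible_password: "{{ vault_qtros_password }}"    # older releases use ansible_ssh_pass
ansible_connection: winrm
ansible_winrm_transport: kerberos
ansible_winrm_kerberos_delegation: yes

# playbook task: log on again as the domain user so Installer.exe gets real network credentials
- name: Run Installer.exe with domain credentials
  win_command: C:\deploy\Installer.exe    # placeholder path and arguments
  become: yes
  become_method: runas
  become_user: qtros@ABC.RU
  vars:
    ansible_become_password: "{{ vault_qtros_password }}"
    ansible_become_flags: logon_type=new_credentials logon_flags=netcredentials_only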

Related

How to find the source of audit failure in Windows server?

I get audit failure messages in the security event log, every second.
Event ID: 4625
Logon type: 3
Process name: lsass.exe
Failed login: schOPSSH
COPSSH (SSH for Windows) was installed on the machine, and its user was svcOPSSH, not schOPSSH. So I thought the person who installed it had misconfigured it. I stopped the SSH service, even removed the software and deleted all users except admin, but I still get login attempts with that username. I checked all services and their credentials; everything was okay. I searched the registry for the user "schOPSSH" but couldn't find any record.
Do you have any idea to find the source of this login attempt? Thanks.
I figured it out. In my case, there was a brute-force attack from an external IP. I didn't expect that. I used MS Network Monitor to check incoming requests.
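For anyone hunting the same thing, a sketch of how the failed logons could be summarized with PowerShell before reaching for a packet capture (the property indexes follow the standard 4625 event template, so verify them against your own events):
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 500 |
    ForEach-Object {
        [pscustomobject]@{
            Time     = $_.TimeCreated
            Account  = $_.Properties[5].Value     # TargetUserName
            SourceIP = $_.Properties[19].Value    # IpAddress
        }
    } |
    Group-Object SourceIP, Account |
    Sort-Object Count -Descending |
    Select-Object Count, Name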

Bacula - Director unable to authenticate with Storage daemon

I'm trying to stay sane while configuring a Bacula server on my virtual CentOS Linux release 7.3.1611 to do a basic local backup job.
I prepared all the configuration I found necessary in the conf files and prepared the MySQL database accordingly.
When I want to start a job (local backup for now) I enter the following commands in bconsole:
*Connecting to Director 127.0.0.1:9101
1000 OK: bacula-dir Version: 5.2.13 (19 February 2013)
Enter a period to cancel a command.
*label
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
Automatically selected Storage: File
Enter new Volume name: MyVolume
Defined Pools:
1: Default
2: File
3: Scratch
Select the Pool (1-3): 2
This returns
Connecting to Storage daemon File at 127.0.0.1:9101 ...
Failed to connect to Storage daemon.
Do not forget to mount the drive!!!
You have messages.
where the message is:
12-Sep 12:05 bacula-dir JobId 0: Fatal error: authenticate.c:120 Director unable to authenticate with Storage daemon at "127.0.0.1:9101". Possible causes:
Passwords or names not the same or
Maximum Concurrent Jobs exceeded on the SD or
SD networking messed up (restart daemon).
Please see http://www.bacula.org/en/rel-manual/Bacula_Freque_Asked_Questi.html#SECTION00260000000000000000 for help.
I double- and triple-checked all the conf files for integrity, names and passwords. I don't know where else to look for the error.
I will gladly post any parts of the conf files but don't want to blow up this question right away if it might not be necessary. Thank you for any hints.
This might help someone who made the same mistake as I did:
After looking through manual page after manual page, I found it was my own mistake. I had (for a reason I don't precisely recall, I guess to troubleshoot another issue earlier) set all ports to 9101: for the Director, the File Daemon and the Storage Daemon.
So I assume the Bacula components must have blocked each other's communication on port 9101. After restoring the default ports (9102, 9103) according to the manual, it worked and I can now back up locally.
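For reference, a sketch of the usual default port layout (resource names here are placeholders and most directives are omitted; only the port settings matter for this problem):
# bacula-dir.conf
Director {
  Name = bacula-dir
  DIRport = 9101        # bconsole connects to the Director here
  ...
}
Storage {
  Name = File
  Address = 127.0.0.1
  SDPort = 9103         # must match the Storage daemon's own port, not 9101
  ...
}

# bacula-sd.conf
Storage {
  Name = bacula-sd
  SDPort = 9103
  ...
}

# bacula-fd.conf
FileDaemon {
  Name = bacula-fd
  FDport = 9102
  ...
}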
You have to add the Director's name from the backup server: edit /etc/bacula/bacula-fd.conf on the remote client (see "List Directors who are permitted to contact this File daemon"):
Director {
Name = BackupServerName-dir
Password = "use *-dir password from the same file"
}

Provide root access but not DB access

We're trying to give production server access to more ops people on our team.
The only issue is the DB access concern: for most tasks ops do not need DB access, and only a limited group of people should have it.
Let's say we have two servers:
Application Server:
tomcat (app needs access to DB server)
DB server:
Database
So ultimately we would like to give root access to the "application server" so that ops can do all sorts of maintenance on it, but without their being able to gain access to the DB server. This means I cannot just store the DB password in a configuration file for the app to read, for example.
Are there well known practices that would solve issue like that?
First, any credential the Application Server uses to access the DB Server should be considered handed over to anyone with root on the Application Server. Since you say DB access must be limited, you cannot give ops complete root on the Application Server.
But do not lose hope, there is sudo.
sudo can give users or groups access to root powers, but only for limited purposes. Unfortunately, setting up sudo correctly can be tricky (you have to prevent subshells and wildcards from yielding full root), but it is possible.
There are too many permutations for a general answer beyond sudo without additional information about your use case. A great reference for sudo with exactly this use case in mind is 'Sudo Mastery' by Michael W. Lucas.
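As a rough illustration of the idea (the group name and commands are only examples; avoid wildcards and anything that can spawn a shell or read arbitrary files):
# /etc/sudoers.d/ops  (check the syntax with visudo -c before relying on it)
Cmnd_Alias APP_ADMIN = /usr/bin/systemctl restart tomcat, \
                       /usr/bin/systemctl status tomcat
%ops ALL = (root) APP_ADMIN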

SQL Server Data Tools Data Comparison fails when connecting to Azure

When using the SQL Server Data Tools Data Comparison tool, a few of us here are unable to do comparisons when the source is an Azure database.
The error we get is below:
---------------------------
Microsoft Visual Studio
---------------------------
Data information could not be retrieved because of the following error:
Value cannot be null.
Parameter name: conn
Value cannot be null.
Parameter name: conn
The connection test works fine, and I've tried creating a new connection. As a side note, if I do a data compare with a non-Azure source, things work fine.
SQL Server Data Tools version is 12.0.50512.0.
We can access the server using SSMS without any problems.
It turned out to be a permissions issue, but I was able to diagnose it using the details available at https://social.msdn.microsoft.com/Forums/sqlserver/en-US/740e3ed8-bb05-48f7-8ea6-721eca071198/publish-to-azure-db-v12-failing-value-cannot-be-null-parameter-name-conn?forum=ssdt
Gathering an Event Log for SSDT
Open a new command prompt as Administrator.
Run the following commands:
logman create trace -n DacFxDebug -p "Microsoft-SQLServerDataTools" 0x800 -o "%LOCALAPPDATA%\DacFxDebug.etl" -ets
logman create trace -n SSDTDebug -p "Microsoft-SQLServerDataToolsVS" 0x800 -o "%LOCALAPPDATA%\SSDTDebug.etl" -ets
Run whatever the target/issue scenario is in SSDT, then go back to the command prompt and run the following commands:
logman stop DacFxDebug -ets
logman stop SSDTDebug -ets
The resulting ETL files will be located at %LOCALAPPDATA%\SSDTDebug.etl and %LOCALAPPDATA%\DacFxDebug.etl and can be navigated to using Windows Explorer.
There is no such limitation. Ref - https://msdn.microsoft.com/en-us/hh272693(v=vs.103).aspx
Check whether a firewall rule is open for this connection. If not, add the current client IP to the allowed IP addresses of that SQL Azure DB.
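For example, a server-level rule can be added over T-SQL; this is only a sketch (run it in the master database of the Azure SQL server, and replace the rule name and IP address with your own):
EXECUTE sp_set_firewall_rule
    @name = N'ssdt-workstation',
    @start_ip_address = '203.0.113.42',
    @end_ip_address = '203.0.113.42';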
I find that if I have already compared a local DB beforehand (in the same session) and then try to compare an Azure DB, some strange lock prevents login on the Azure SQL DB.
Shut down Visual Studio, reopen it, and it should connect OK.

Chef chef-validator.pem security

Hi, I am setting up a cluster of machines using Chef at offsite locations. If one of these machines were stolen, what damage could the attacker do to my Chef server or other nodes by having possession of chef-validator.pem? What else can they access through Chef? Thanks!
This was one of the items discussed at a recent Foodfight episode on managing "secrets" in chef. Highly recommended watching:
http://foodfightshow.org/2013/07/secret-chef.html
The knife bootstrap operation uploads this key when initializing new Chef clients. Possession of this key enables a client to register itself against your Chef server. That is actually its only function; once the client is up and running, the validation key is no longer needed.
But it can be abused... As @cbl has pointed out, if an unauthorized third party gets access to this key, they can create new clients that can see everything on your Chef server that normal clients can see. It can theoretically be used to mount a denial-of-service attack on your Chef server by flooding it with registration requests.
The Foodfight panel recommends a simple solution: enable the chef-client cookbook on all nodes. It contains a "delete_validation" recipe that will remove the validation key and reduce your risk exposure.
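For instance, something along these lines (the node name is a placeholder, and the chef-client cookbook is assumed to already be available on your Chef server):
knife node run_list add NODE_NAME 'recipe[chef-client::delete_validation]'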
The validator key is used to create new clients on the Chef Server.
Once the attacker gets hold of it, he can pretend he's a node in your infrastructure and have access to the same information any node has.
If you have sensitive information in an unencrypted data bag, for example, he'll have access to that.
Basically he'll be able to run any recipe from any cookbook, do searches (and have access to all your other nodes' attributes), read data bags, etc.
Keep that in mind when writing cookbooks and populating the other objects on the server. You could also monitor the Chef server for any suspicious client creation activity, and if you have any reason to believe the validator key has been stolen, revoke it and issue a new one.
It's probably a good idea to rotate the key periodically as well.
As of Chef 12.2.0 the validation key is no longer required:
https://blog.chef.io/2015/04/16/validatorless-bootstraps/
You can delete your validation key on your workstation and then knife will use your user credentials to create the node and client.
There are also some other nice features: whatever you supply for the run_list and environment is applied to the node when it is created. No more relying on the first-boot.json file being read by chef-client, and on the run completing, before node.save creates the node at the end of the bootstrapping process.
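A sketch of what such a bootstrap might look like; the host, node name, run list and SSH options are placeholders, and with no validation key present knife signs the client/node creation with your own user key:
knife bootstrap node1.example.com \
  --node-name node1 \
  --run-list 'recipe[base]' \
  --environment production \
  --ssh-user ubuntu --sudo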
Basically, chef-client uses two keys to authenticate with the server:
1) the organization validator.pem, and
2) the user.pem
Without the correct combination of these two keys, chef-client won't be able to authenticate with the Chef server.
They can even connect any node to the Chef server with the stolen key via the following steps:
1. Copy the validator key into the /etc/chef folder on any machine.
2. Create a client.rb file with the following details:
log_location STDOUT
chef_server_url "https://api.chef.io/organizations/ORGNAME"
validation_client_name 'ORGNAME-validator'
validation_key '/etc/chef/validator.pem'
3. Run chef-client to connect to the Chef server.
