How to find the source of audit failures in Windows Server? - windows-server-2012

I get audit failure messages in the security event logs, every second.
Event id: 4625
logon type: 3
Process name: lsass.exe
failed login: schOPSSH
COPSSH (SSH for Windows) was installed on the machine, and its user was svcOPSSH, not schOPSSH. So I thought the person who installed it had misconfigured something. I stopped the SSH service, then even removed the software and deleted all users except the admin account, but I still get login attempts with that username. I checked all services and their credentials; everything was okay. I searched the registry for the user "schOPSSH" but couldn't find any record.
Do you have any idea how to find the source of these login attempts? Thanks.

I figured it out. In my case it was a brute-force attack from an external IP; I didn't expect that. I used Microsoft Network Monitor to check the incoming requests.
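For anyone hitting the same thing: the 4625 event itself usually records the remote host under "Network Information" (Source Network Address / Source Port), so you can often see where the attempts come from even without a packet capture. A minimal sketch, run from an elevated prompt, that dumps the five most recent failures:
wevtutil qe Security /q:"*[System[(EventID=4625)]]" /c:5 /rd:true /f:text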


HTTP Web Server: Agent did not complete within configured time limit

I have a web application that builds web pages using an agent (it's written in LotusScript and we use [print html] to output HTML), and from time to time I see the error below.
02-11-2020 10:00:18 HTTP Web Server: Agent did not complete within configured time limit [/path-to-database.nsf/web?openagent] Anonymous
02-11-2020 10:00:18 HTTP Server: Execution time limit exceeded by Agent '(Web)|Web' in database '/path-to-database.nsf'. Agent signer 'signer name'.
As a result the HTTP task gets stuck, so I have to restart it, which means I have to monitor it all the time.
It does not seem to be related to the agent's execution time, otherwise I would have this issue constantly.
Activity does not seem to be the issue either; according to Google Analytics it's around 50 active users.
I doubt [Server Tasks\Agent manager] will help, because the agent runs under the HTTP task.
Does anybody know how to figure out the reason for this issue and where I should dig to fix it?
Update
Domino version 11.0
The agent is triggered by anonymous visitors and does some relatively heavy computation to construct the HTML response (loops and lookups are present, but I'm sure all loops end properly, with no infinite runs).
I guess the settings for HTTP agents are under this section (so 2 minutes).
Web Agents and Web Services
Run web agents and web services concurrently? Enabled
Web agent and web services timeout: 120 seconds
In general a request takes between 300 ms and 1 second; there are some heavy pages that take 1-5 seconds (but nothing like 10 seconds or more).
I notice the error only when we get more than 50 active users (who actively open new pages and thus trigger the agent).
I guess Richard is right and there must be some condition where the agent gets stuck (maybe related to view updates or some background process).
For now I simply restart HTTP to get this issue fixed (for some time).
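If it helps anyone, I do that from the Domino server console rather than restarting the whole server; the two commands below are a sketch (names as in standard Domino administration, adjust to your environment). The first shows what each HTTP worker thread is currently doing, which can help spot the request the agent is stuck on, and the second restarts just the HTTP task:
tell http show thread state
restart task http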
So my question could be rephrased as:
What can delay the agent that builds the web page (taking into account that it correlates with 50-100 active users)?
Thanks a lot :-)

Mautic is not processing the queue. Messages are in the spool/default folder

Mautic will not send my queued emails.
I have set up the cron jobs and they are running as expected. The cron job email report for the ":messages:send" cron job that runs every minute is always this...
Processing message queue
Messages sent: 0
Content-type: text/html; charset=UTF-8
I have messages in my queue, which I have sent via the Contacts Tab, by clicking on the contact name (myself) and then clicking on the Send Email button, just to send myself a test email.
In my configuration email settings I am using PHP Mail.
If I have the mail set to 'Send Immediately' it works fine: I get my test email instantly. But if I have it set to queue, the message goes into my spool/default folder, and when the cron job runs it is not sent.
Things I have tried so far....
I deleted the cache folder contents
I checked to see if I have two versions of this file: SendChannelBroadcastCommand.php - I don't, I just have this file once, in the ChannelBundle/Command folder. It is not also in the CoreBundle/Command folder (as suggested by a similar post)
I deleted all of the queued messages in the spool/default folder, then sent some more... which are now sitting in the folder just like before.
Things that might be a factor?
The permissions for the file SendChannelBroadcastCommand.php are set to 644. I don't know if this is correct but assume it is.
When I open the SendChannelBroadcastCommand.php file in Dreamweaver, it flags lots of syntax errors. I don't really know enough about code to determine whether these are genuine errors or whether Dreamweaver is just being a little too sensitive. I also don't know if this file is included inside another one that would make those errors disappear if Dreamweaver could see the complete end result, but I thought it was worth a mention.
Things that I'm sure are not a problem
I'm certain that the cron job is set up correctly. It is running. And I receive the email reports (although I've turned those back off now as I don't want a report every minute)
I've seen this problem mentioned a few times on other forums but none of the solutions are working for me.
My Mautic installation is 2.14.0
My PHP is 7.0.31
Installation was via Softaculous on cPanel on a dedicated server hosted with Namecheap
Thank you in advance for any suggestions that I can try to fix this issue.
Steve.
Oh, in case you're wondering... I am using PHP Mail as Mautic would not connect to Amazon SES. For that I get the following error (which my hosting company was unable to help me fix, so I'm trying PHP mail)
Connection could not be established with host email-smtp.us-east-1.amazonaws.com [Connection refused #111] Log data: ++ Starting Mautic\EmailBundle\Swiftmailer\Transport\AmazonTransport !! Connection could not be established with host email-smtp.us-east-1.amazonaws.com [Connection refused #111] (code: 0)
++ Starting Mautic\EmailBundle\Swiftmailer\Transport\AmazonTransport !! Connection could not be established with host email-smtp.us-east-1.amazonaws.com [Connection refused #111] (code: 0)
Regarding your Amazon SES "Connection refused #111 (code: 0)" error: Mautic is hard-coded to use port 2587 to connect to Amazon SES, regardless of what port you put in the SMTP port field (this is the case in Mautic 2.13.1). Make sure TCP 2587 is open in/out on your web server's firewall; that change solved the error message for me. I have not experienced the queue error, so sorry, I can't comment on that.
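A quick way to verify the port is actually reachable from the web server is a plain TCP check (assuming netcat is installed; telnet works too):
nc -zv email-smtp.us-east-1.amazonaws.com 2587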
If your queue setup is correct, there is one more thing to set up at the cron level.
That is: php /path/to/mautic/bin/console mautic:emails:send
This command processes the queued emails for Mautic.
Check this for more info: https://docs.mautic.org/en/setup/cron-jobs
If your queue setup is good, this missing command is the only reason the emails are not being processed.
Just add the command and try again; it should work.
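For example, a crontab entry could look roughly like this (the path is a placeholder for your own installation; on some 2.x installs the console script lives under app/console instead of bin/console):
* * * * * php /path/to/mautic/bin/console mautic:emails:send > /dev/null 2>&1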

Ansible, windows - how to access network folder?

I want to use Ansible to automate my deployment process. Let me say a few words about it. The deployment process in my case consists of two steps:
update the DB (SQL script)
copy a predefined set of files to various network folders (on different machines)
For this purpose I use a special self-written program called Installer.exe. If I run it myself it performs the operations with my credentials, so it has all my rights, e.g. access to the network folders and the SQL database.
I want to use Ansible as a wrapper for my program (Installer.exe), not as a replacement for it. My target scenario: Ansible prepares the configuration files and runs my installer on the remote Windows machine. I've hit a problem: when my program is run by Ansible it doesn't have my full rights. It can successfully access SQL Database 1 on the same machine, but can't access SQL Database 2 on a remote machine or access network folders. I always get "access denied" on network access, and SQL Database 2 says something about NT AUTHORITY\ANONYMOUS LOGON. It looks like the double-hop problem, but not exactly, as far as I understand it: double hop is about service accounts, whereas I am trying to access the remote server with my own personal account.
UPD 1:
My variables for that group are:
ansible_user: qtros@ABC.RU
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
ansible_winrm_operation_timeout_sec: 120
ansible_winrm_read_timeout_sec: 150
ansible_winrm_transport: kerberos
ansible_winrm_kerberos_delegation: yes
Before any actions with Ansible I run the following command:
$> kinit qtros@ABC.RU
and enter my password. Later, if I run klist, I can see some valid tickets. I intended to use a domain account, not the local system account. Am I doing it right?
UPD 2: if I add this command to the playbook:
...
raw: "klist"
...
I get something like:
fatal: [targetserver.abc.ru]: FAILED! => {"changed": true, "failed": true, "rc": 1, "stderr": "", "stdout": "\r\nCurrent LogonId is 0:0x20265db4\r\nError calling API LsaCallAuthenticationPackage (ShowTickets substatus): 1312\r\n\r\nklist failed with 0xc000005f/-1073741729: A specified logon session does not exist. It may already have been terminated.\r\n\r\n\r\n", "stdout_lines": ["", "Current LogonId is 0:0x20265db4", "Error calling API LsaCallAuthenticationPackage (ShowTickets substatus): 1312", "", "klist failed with 0xc000005f/-1073741729: A specified logon session does not exist. It may already have been terminated.", "", ""]}
Based on your problem statement, it sounds like the Windows machine is running installer.exe under the Local System account, which has no rights outside of the Windows machine itself and will always fail when trying to run any procedure on SQL Database 2. This wouldn't be a Kerberos double-hop scenario: for one, there's only one hop between the Windows machine in the middle (the one running installer.exe) and SQL Database 2. Since Ansible is just wrapping installer.exe, then unless I'm missing something, run it on the Windows machine with AD domain credentials that have the appropriate rights to SQL Database 2.
EDIT: As the focus of your question was on resolving the SQL Database 2 message regarding NT AUTHORITY\ANONYMOUS LOGON, and whether or not this is a Kerberos double-hop problem (it doesn't look like it), that's what I answered. Note that you have ansible_user defined but not ansible_ssh_pass. There's an apparent bug in the documentation (http://docs.ansible.com/ansible/intro_windows.html), so use ansible_ssh_pass instead of ansible_ssh_password.
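So, as a sketch, the group variables would end up looking something like this (vault_qtros_password is just a placeholder for however you store the password, e.g. in Ansible Vault):
ansible_user: qtros@ABC.RU
ansible_ssh_pass: "{{ vault_qtros_password }}"
ansible_connection: winrm
ansible_winrm_transport: kerberos
ansible_winrm_kerberos_delegation: yes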

Beanstalkd / Pheanstalk security issue

I have just started using beanstalkd and pheanstalk and I am curious whether the following situation is a security issue (and if not, why not?):
When designing a queue that will contain jobs for an eventual worker script to pick up and perform SQL database queries, I asked a friend what I could do to prevent an online user from going onto port 11300 of my server and inserting a job into the queue himself, hence causing the job to be executed with malicious code. I was told that I could include a password inside the job being sent.
Though after some time passed, I realized that someone could perform a few simple commands in a terminal to obtain a job from the queue, find the password, and then create jobs with the password included:
telnet thewebsitesipaddress 11300 //creating a telnet connection
list-tubes //finding which tubes are currently being used
use a_tube_found //using one of the tubes found
peek-ready //see whats inside one of the jobs and find the password
What could be done to make sure this does not happen and my queue doesn't get hacked / controlled?
Thanks in advance!
You can avoid those situations by placing beanstalkd behind a firewall or in a private network.
DigitalOcean (for example) offers such a service where you have a private network IP address which can be accessed only from servers of the same location.
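For example, you can bind beanstalkd to the loopback (or the private-network) address so it is not reachable from the internet at all, or drop the port at the firewall; a sketch, adjust the address and subnet to your own network:
beanstalkd -l 127.0.0.1 -p 11300
iptables -A INPUT -p tcp --dport 11300 ! -s 10.0.0.0/8 -j DROP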
We've been using beanstalkd in our company for more than a year, and we haven't had any of those issues yet.
I see, but what if the producer was a page called index.php, where, when someone visited it, a job would be sent to the queue. In this situation, wouldn't the server have to be on an open network?
The browser has no way to contact the job server; it only accesses the resources you allow it to, that is, the view page. Only the back-end is allowed to access the job server. Also, if you build the web application so that the front-end is separated from the back-end, you're going to have even fewer potential security issues.

Chef chef-validator.pem security

Hi, I am setting up a cluster of machines using Chef at offsite locations. If one of these machines were stolen, what damage could the attacker do to my Chef server or other nodes by having possession of chef-validator.pem? What other things could they access through Chef? Thanks!
This was one of the items discussed at a recent Foodfight episode on managing "secrets" in chef. Highly recommended watching:
http://foodfightshow.org/2013/07/secret-chef.html
The knife bootstrap operation uploads this key when initializing new Chef clients. Possession of this key enables the client to register itself against your Chef server. That is actually its only function; once the client is up and running, the validation key is no longer needed.
But it can be abused... As @cbl has pointed out, if an unauthorized third party gets access to this key, they can create new clients that can see everything on your Chef server that normal clients can see. It can theoretically also be used to mount a denial-of-service attack on your Chef server by flooding it with registration requests.
The Foodfight panel recommends a simple solution: enable the chef-client cookbook on all nodes. It contains a "delete_validation" recipe that will remove the validation key and reduce your risk exposure.
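For example (a sketch using the public chef-client cookbook; the node name is a placeholder), you can push the recipe onto a node's run list with knife:
knife node run_list add NODE_NAME 'recipe[chef-client::delete_validation]'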
The validator key is used to create new clients on the Chef Server.
Once the attacker gets hold of it, he can pretend he's a node in your infrastructure and have access to the same information any node has.
If you have sensitive information in an unencrypted data bag, for example, he'll have access to that.
Basically he'll be able to run any recipe from any cookbook, do searches (and have access to all your other nodes' attributes), read data bags, etc.
Keep that in mind when writing cookbooks and populating the other objects on the server. You could also monitor the Chef server for any suspicious client-creation activity, and if you have any reason to believe that the validator key has been stolen, revoke it and issue a new one.
It's probably a good idea to rotate the key periodically as well.
As of Chef 12.2.0 the validation key is no longer required:
https://blog.chef.io/2015/04/16/validatorless-bootstraps/
You can delete your validation key on your workstation and then knife will use your user credentials to create the node and client.
There are also some other nice features of this, since whatever you supply for the run_list and environment is also applied to the node when it is created. No more relying on the first-boot.json file being read by the chef-client, and on the run having to complete, before the node.save creates the node at the end of the bootstrapping process.
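For example, with the validation key removed from your workstation, a validatorless bootstrap looks roughly like this (host, user, and run list are illustrative; adjust to your environment):
knife bootstrap node1.example.com -N node1 -x ubuntu --sudo --run-list 'recipe[base]'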
Basically, chef-client uses two keys to authenticate to the server:
1) the organization's validator.pem, and
2) user.pem
Without the correct combination of these two keys, chef-client won't be able to authenticate with the Chef server.
They can even connect any node to the Chef server with the stolen key via the following steps:
1: Copy the validator key into the /etc/chef folder on any machine
2: Create a client.rb file with the following details:
log_location STDOUT
chef_server_url "https://api.chef.io/organizations/ORGNAME"
validation_client_name 'ORGNAME-validator'
validation_key '/etc/chef/validator.pem'
3: Run chef-client to connect to the chef server
