Is it possible to secure the communication between filebeat and logstash with a token of some kind?
I know that it is possible to secure the filebeat --> logstash connection through TLS mutual authentication, but I feel client certificates are pretty hard to manage if we have many different filebeat clients (prove me wrong and I'll happily change my mind).
I am also aware that it is possible to secure the logstash --> elastic connection with API keys, but that's not what I need, I need securing from filebeat to logstash.
Use case
I'm setting up a centralized ELK stack for log analytics that will be used to collect logs from a variety of different systems; some may be actual servers, but most will be developers' workstations.
I would like to set up a system in which a developer logs into a secured internal service, asks for a token, and starts streaming logs from her/his workstation right away.
If a breach is detected I would like the response to be a simple process: it should be enough to invalidate the leaked token, issue a new one, and then remove from ES the unwanted log entries streamed with the leaked token.
It is not possible; the only way to secure the communication between filebeat and logstash is using SSL/TLS certificates.
The closest you can get to what you want is to send the filebeat logs directly to elasticsearch; then you would be able to use an API key. This requires security to be enabled in elasticsearch.
In either case you would need to configure the filebeat.yml file on each developer's workstation.
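For reference, a hedged sketch of what both options look like in filebeat.yml (hosts, paths, and the API key value are placeholders, and filebeat only allows one output to be enabled at a time):

```yaml
# Option 1: mutual TLS to logstash (per-client certificate management)
output.logstash:
  hosts: ["logstash.example.internal:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]
  ssl.certificate: "/etc/filebeat/client.crt"
  ssl.key: "/etc/filebeat/client.key"

# Option 2: directly to elasticsearch with an API key
# (requires security enabled on the elasticsearch side)
# output.elasticsearch:
#   hosts: ["https://es.example.internal:9200"]
#   api_key: "id:api_key_value"
```

Revoking an API key in elasticsearch is a single API call, which is close to the token workflow described in the question; with the logstash output, revocation means rotating or blacklisting the client certificate instead.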
Related
I'm following the guide here to set up Application Insights telemetry on a frontend web form. I wish to use the snippet-based setup. I notice, however, that it requires me to embed the connection url in the HTML page. Is that a security issue?
There would be nothing to stop a malicious user from using browser dev tools to grab that url and then send any API calls to that url. Should I be concerned about this? If so, what is the recommended approach for securing this connection url?
... what is the recommended approach for securing this connection url.
There is none. For now you have to accept that it is visible. See also this open issue regarding the topic.
Should I be concerned about this?
Not so much. The instrumentation key cannot be used to read any telemetry. However, it could be used to send bogus telemetry to your Application Insights resource. This could lead to higher costs depending on the amount of data ingested, and it could clutter your logs, possibly masking relevant telemetry.
Unless the application is hosted on a VNet-integrated resource you cannot restrict access to the Application Insights resource. If it is, then you can set Application Insights to deny queries or ingestion from external sources in the network isolation settings.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/private-link-security
So even if someone gets the url they cannot access it.
I'm setting up a server as a Windows Virtual Desktop host pool. Is there a way to keep track of who is accessing the server?
Right now I am using an Azure Log Analytics workspace connected to the VM, and I've tried to find some queries to get the information of who is accessing the server.
Query:
VMConnection
| where Computer == 'TestVM01'
I expect the output to show who is accessing the server, but I don't know how to write the query. If you know some information about it, please share your ideas here, thanks so much.
Accessing the VM, or logging on to the VM?
Logging on to the VM would be OS logging, e.g. Windows Event audit logs or whatever the Linux equivalents are.
Accessing the VM would be network traffic, or in other words: your NSG flow logs, for which you would need to have Network Watcher configured. You would be able to see which source tries to connect at which point in time on which port (RDP/SSH).
So you could relate both of those logs with each other by matching the time-stamps.
AFAIK the portal, and thus the logs from Azure itself, don't keep track of which person tries to access a VM for logon.
I believe it is possible to have Windows Event Logs sent to Log Analytics (or rather, have Log Analytics collect the event logs); no clue for the Linux logs. So that's one part of the whole; I do not know if you can do the same for the NSG flow logs.
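As a hedged sketch of the logon side: if Windows Security events are being collected into the workspace, a query along these lines could surface successful logons ('TestVM01' is the example machine from the question, and the columns assume the standard SecurityEvent schema):

```
SecurityEvent
| where Computer == "TestVM01"
| where EventID == 4624   // 4624 = an account was successfully logged on
| project TimeGenerated, Account, LogonType, IpAddress
```

The timestamps from this query are what you would correlate against the NSG flow logs mentioned above.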
I'm currently performing a research on cloud computing. I do this for a company that works with highly private data, and so I'm thinking of this scenario:
A hybrid cloud where the database is still in-house. The application itself could be in the cloud because once a month it can get really busy, so there's definitely some scaling profit to gain. I wonder how security for this would exactly work.
A customer would visit the website (which would be in the cloud) through a secure connection. This means that the data will be passed forward to the cloud website encrypted. From there the data must eventually go to the database but... how is that possible?
Because the database server in-house doesn't know how to handle the already encrypted data (I think?). The database server in-house is not a party to the certificate that has been set up between the customer and the web application. Am I right, or am I overlooking something? I'm not an expert on certificates and encryption.
Also, another question: If this could work out, and the data would be encrypted all the time, is it safe to put this in a public cloud environment? or should still a private cloud be used?
Thanks a lot in advance!
Kind regards,
Rens
The secure connection between the application server and the database server should be fully transparent from the application's point of view. A VPN connection can connect the cloud instance that your application is running on with the on-site database, allowing an administrator to simply define a datasource using the database server's IP address.
Of course this does create a security issue when the cloud instance gets compromised.
Both systems can live separately and communicate with each other through a message bus. The web site can publish events for the internal system (or any party) to pick up and the internal system can publish events as well that the web site can process.
This way the web site doesn't need access to the internal database and the internal application doesn't have to share more information than is strictly necessary.
By publishing those events on a transactional message queue (such as MSMQ) you can make sure messages are never lost, and you can configure transport-level security and message-level security to ensure that others aren't tampering with the messages.
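As an illustration only, the decoupling described above can be sketched with an in-process queue standing in for the broker (MSMQ, RabbitMQ, etc.); the event names and payload are made up for the example:

```python
import json
import queue

# Stand-in for a transactional message broker such as MSMQ.
bus = queue.Queue()

def publish(topic: str, payload: dict) -> None:
    """The web site publishes an event; it never touches the internal DB."""
    bus.put(json.dumps({"topic": topic, "payload": payload}))

def consume() -> dict:
    """The internal system picks up events and applies them to its own database."""
    return json.loads(bus.get())

# The cloud-hosted web site publishes an order event...
publish("order.placed", {"order_id": 42, "total": 99.95})

# ...and the internal system processes it behind the firewall.
event = consume()
print(event["topic"])  # order.placed
```

The point of the pattern is that only serialized events cross the boundary, so the internal database never has to be reachable from the cloud at all.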
The internal database will not get compromised once a secured connection is established with the static MAC ID of the user accessing the database. The administrator can provide access to a MAC ID through a one-time approval and add the user to his Windows console.
We're having trouble passing a token between our system (IIS) and an affiliate mall site using IIS Web Services.
I need to see the data that's being passed to debug the issue, but the traffic is encrypted. I understand the scenario of using either Wireshark or Fiddler2 if I had the private key...I doubt the partner will supply us with theirs...
So, is there a logging method on IIS7 or a debug mode I can use to decrypt the traffic, or is there another method without the need for decryption? My feeling is we'll have to see if the partner has a debug mode they can flag so we can temporarily see the data in the packets.
Scenario: We have our dedicated servers hosted with a hosting provider. They are running web apps, console apps along with the database which is Sql Server Express edition.
The applications encrypt/decrypt the data to/from the DB. We also store the keys on their server. So, theoretically, the hosting provider can access our keys and decrypt our data.
Question: How can we prevent the hosting provider from accessing our data?
We don't want hosting provider's users to just log into Sql Server and see the data.
We don't want an un-encrypted copy of database files in the box.
To mitigate no. 1: Encrypting app.configs to not store plain text DB username and password.
To mitigate no. 2: Turn on EFS on the SQL Server data folder. We could use TDE, but our SQL Server is Web Edition and the hosting company is going to charge us a fortune to use Enterprise Edition.
I'd really appreciate it if you guys have any suggestions about the above.
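For mitigation no. 1, a sketch of the standard approach is the aspnet_regiis tool with the -pef switch, which encrypts a named config section in place (the section name and path below are placeholders; by default it uses DPAPI, so the decryption key still lives on that machine):

```
aspnet_regiis.exe -pef "connectionStrings" "C:\inetpub\myapp"
```

Note the caveat: because the machine itself holds the key, this stops casual browsing of the config file but not a provider with full access to the box, which is consistent with the answers below.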
You can help mitigate it, but prevention is probably impossible.
It's generally considered that if an attacker has physical access to the machine, they own everything on it.
If this is a concern, you should consider purchasing a server, a virtual server, or using a colocation center and providing your own machine or hosting it yourself entirely.
When you purchase a server, virtual server, or colocate your own hardware, the service provider doesn't have an account on your OS. If you use an encrypted file system, and only access your box via SSH (SSL/TLS), then they will not be able to easily access any data on your computer that isn't being sent out to the network.
The only foolproof way is to have your own hardware in your own secure location and bring the network to your box.
It's possible to do database encryption such that the client does the decryption (though if your indexes are sorted, the server obviously needs to be able to figure out relative order of things in the index). I can't think of a link off the top of my head. However, if the client is the web app, there's not much you can do.
There are also various types of homomorphic encryption, but I'm not sure there's anything that scales polynomially. In any case, the overheads are huge.
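One concrete flavor of client-side database encryption is a "blind index": the client stores a keyed hash alongside the ciphertext so the server can answer equality queries without ever seeing plaintext or keys. A minimal sketch, assuming the client holds the index key (the email value and column names are invented for the example):

```python
import hashlib
import hmac
import os

# In practice this would be a fixed secret held only by the client,
# never stored on the hosted server.
INDEX_KEY = os.urandom(32)

def blind_index(value: str) -> str:
    """Deterministic keyed hash of a value, safe to store server-side."""
    return hmac.new(INDEX_KEY, value.encode(), hashlib.sha256).hexdigest()

# The client would store (blind_index(email), encrypted_email) and later
# query: WHERE email_idx = blind_index('alice@example.com')
idx1 = blind_index("alice@example.com")
idx2 = blind_index("alice@example.com")
assert idx1 == idx2  # deterministic, so equality lookups work
```

This only supports exact-match lookups; as noted above, sorted indexes or range queries force you to reveal relative order to the server, and fully general computation over ciphertext is the (expensive) homomorphic-encryption territory.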
I'm curious if there's a reason why you don't trust your hosting provider - or is this just a scenario?
If this is something you have to worry about, sounds like you should be looking at other providers. Protecting yourself from your hosting partner seems counterproductive, IMO.