How can we map a user's logon and logoff events? (Security log)

I have some knowledge of events 4624 (logon) and 4634 (logoff). As Microsoft's documentation suggests, we can correlate these events using the Logon ID.
My question is: is there any other, more efficient way to do this?
My requirement is to read the events for the last 30 days and correlate the logon and logoff events to find each logon duration.
According to the documentation, the Logon ID is only unique between reboots on the same computer. In that 30-day period the computer may have been rebooted several times, so I suspect Logon IDs may be duplicated when analyzing the events for the last 30 days.
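If you do correlate the events directly, one way to avoid cross-reboot Logon ID collisions is to also watch for the "Windows is starting up" event (4608) and treat it as a session boundary. Below is a minimal sketch of that idea; the field names (eventId, logonId, time) are assumptions about however you export the events, not the raw event schema.

```typescript
// Pair each 4624 (logon) with the next 4634 (logoff) carrying the same Logon ID,
// and clear the map of open logons whenever a reboot (4608) is seen so a reused
// Logon ID from a later boot session is never matched against an old logon.
interface SecurityEvent {
  eventId: number;   // 4624 = logon, 4634 = logoff, 4608 = Windows starting up
  logonId: string;   // e.g. "0x3E7"
  time: Date;
}

interface Session { logonId: string; logon: Date; logoff: Date; durationMs: number; }

function correlate(events: SecurityEvent[]): Session[] {
  const sessions: Session[] = [];
  let open = new Map<string, Date>();                 // Logon ID -> logon time (current boot)

  for (const ev of [...events].sort((a, b) => a.time.getTime() - b.time.getTime())) {
    if (ev.eventId === 4608) {
      open = new Map();                               // reboot: old Logon IDs are stale
    } else if (ev.eventId === 4624) {
      open.set(ev.logonId, ev.time);
    } else if (ev.eventId === 4634) {
      const logon = open.get(ev.logonId);
      if (logon) {
        sessions.push({
          logonId: ev.logonId,
          logon,
          logoff: ev.time,
          durationMs: ev.time.getTime() - logon.getTime(),
        });
        open.delete(ev.logonId);
      }
    }
  }
  return sessions;   // logons that never got a matching logoff are simply dropped here
}
```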

If you want to track logon and logoff events, I would suggest using logon and logoff scripts that write this information to a database. With such an approach you don't need to parse the event logs (on all servers).

Related

Is polling a folder the same as polling envelopes?

Reading the DocuSign API rules and limits (https://developers.docusign.com/docs/esign-rest-api/esign101/rules-and-limits/), they state a restriction against polling an envelope more than once every 15 minutes. They provide this API call as an example:
GET /accounts/12345/envelopes/AAA/documents/1
My script is set to look at two folders:
/accounts/12345/folders/AAA
/accounts/12345/folders/BBB
Now both of these folders may contain the same envelope. If my script polls those two folders every 15 minutes, does that violate the DocuSign polling rule, since each folder may contain the same envelope?
Are you trying to create an effective polling rate of 7.5 minutes by polling each folder once every 15 minutes? Then yes, that's against the rules.
But your case sounds like polling every 15 minutes and, in some cases not under your control, an envelope may be polled more often than once per 15 minutes. That's fine.
But use a webhook instead, and we'll all be much happier.
See my post on using AWS (free usage tier); we have code examples too.
For your use case, investigate Recipient Connect webhooks, since you're not the sender.
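If you do control envelope creation at some point, the per-envelope form of this is an eventNotification block on the envelope definition; account-level Connect (or Recipient Connect) achieves the same push model without touching each envelope. The sketch below is only illustrative: the URL, template ID, and token are placeholders, and the exact field set should be checked against the current eSignature REST API docs.

```typescript
// Sketch: create an envelope with an eventNotification so DocuSign pushes status
// changes to your listener instead of you polling for them.
const envelopeDefinition = {
  emailSubject: "Please sign",
  templateId: "YOUR-TEMPLATE-ID",                   // placeholder
  status: "sent",
  eventNotification: {
    url: "https://example.com/docusign/webhook",    // your listener endpoint (placeholder)
    requireAcknowledgment: "true",                  // DocuSign retries if you don't return 200
    loggingEnabled: "true",
    envelopeEvents: [
      { envelopeEventStatusCode: "completed" },
      { envelopeEventStatusCode: "declined" },
      { envelopeEventStatusCode: "voided" },
    ],
    recipientEvents: [{ recipientEventStatusCode: "Completed" }],
  },
};

// Demo (sandbox) base URI; production accounts use their own base URI.
async function createEnvelope(accountId: string, accessToken: string) {
  const res = await fetch(
    `https://demo.docusign.net/restapi/v2.1/accounts/${accountId}/envelopes`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(envelopeDefinition),
    }
  );
  return res.json();
}
```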

What are the DocuSign API resource/rate limits? Are they active only in production?

I found this document explaining the resource/rate limits for the DocuSign API: https://developers.docusign.com/esign-rest-api/guides/resource-limits However, I didn't get any errors related to resource limits during development and testing. Are these limits only active in the production environment? Is there a way to test them during development to make sure the application will work correctly in production? Is this document valid/up to date?
Update (I just also want to expand my question here too)
So there is only ONE TYPE of limit? 1,000 calls per hour and that's it? Or do I also need to wait 15 minutes between requests to the same URL?
If the second type of limitation exists (multiple calls to the same URL within an interval of 15 minutes), does it apply only to GET requests? So I can still create/update envelopes multiple times in 15 minutes?
Also, if the second type of limit exists, can I test it in the sandbox environment somehow?
The limits are also active in the sandbox system.
Not all API methods are metered (yet).
To test, just make a lot of calls and you'll see that the limits are applied. E.g., do 1,000 status calls in an hour, or create 1,000 envelopes, and you'll be throttled.
Added
Re: only one type of limit?
Correct. Calls per hour is the only hard limit at this time. If 1,000 calls per hour is not enough for your application in general, or not enough for a specific user of your application, then there's a process for increasing the limit.
Re: 15 minute per status call per envelope.
This is the polling limit. An application is not well behaved if it polls DocuSign more than once every 15 minutes per envelope. In other words, you can poll for status on envelope A once per 15 minutes and also poll once every 15 minutes about envelope B.
The polling limit is monitored during your application's test as part of the Go Live process. It is also soft-monitored once the app is in production. In the future, the monitoring of polling for production apps will become more automated.
If you have a lot of envelopes that you're polling on then you might also run into the 1,000 calls per hour limit.
But there's no need to run into issues with polling: don't poll! Instead set up a webhook via DocuSign Connect or eventNotification and we'll call you.
Re: limits for creating/updating an envelope and other methods
Only the (default) 1,000 calls per hour affects non-polling methods.
E.g., asking for the status of an envelope's recipients, field values, general status, etc., over and over again is polling. Creating/updating envelopes can be done as often as you want (up to the default of 1,000 per hour).
If you want to create more than 1,000 envelopes per hour, we'll be happy to accommodate you. (And many of our larger customers do exactly that.)
The main issue that we're concerned with is unnecessary polling.
There can be other unnecessary calls which we'd also prefer to not have. For example, the OAuth:getUser call is only needed once per user login. It shouldn't be repeated more often than that since the information doesn't change.
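For staying under the hourly cap programmatically, DocuSign responses include rate-limit headers you can inspect. A rough sketch, assuming the X-RateLimit-* header names documented on the resource-limits page (verify against a live response):

```typescript
// Check the remaining-call budget on each response and pause until the bucket
// resets when it gets low, rather than running into a hard throttle.
async function callWithBudget(url: string, accessToken: string) {
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });

  const remaining = Number(res.headers.get("X-RateLimit-Remaining") ?? Infinity);
  const resetEpoch = Number(res.headers.get("X-RateLimit-Reset") ?? 0); // Unix seconds

  if (remaining < 50) {
    const waitMs = Math.max(0, resetEpoch * 1000 - Date.now());
    console.warn(`Only ${remaining} calls left this hour; waiting ${waitMs} ms`);
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
  return res.json();
}
```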

Calling external API only when new data is available

I am serving my users data fetched from an external API. I don't know when this API will have new data, so what would be the best approach to handle that using Node, for example?
I have tried setInterval and node-schedule and got it working, but isn't that expensive for the CPU? For example, over a day I would hit this endpoint every minute to check for new data, but it might only have new data every five minutes or more.
The thing is, this external API isn't run by me. Is the only way to check for updates to hit it every minute? Is there any module in Node that can do that, or an approach that fits better?
Use case 1: Call a weather API for every city in the country and only save data to my DB when it is going to rain in a given city.
Use case 2: Send a notification to the user the moment a given Philips Hue lamp is turned on, without having to hit the endpoint to check whether it is on.
I appreciate the time to discuss this.
If this external API has no means of notifying you when there's new data, then the only thing you can do is to "poll" it to check for new data.
You will have to decide what an "efficient design" for polling is in your specific application and given the type of data and the needs of the client (what is an acceptable latency for new data).
You also need to be sure that your service is not violating any terms of service with your polling scheme or running afoul of rate limiting that may deny you access to the server if you use it "too much".
Is the only way to check for updates to hit it every minute?
Unless the API offers some notification feature, there is no option other than polling at some interval. Polling every minute is fairly frequent. Do your clients really need information that is less than a minute old? Or would it make no real difference if the information was as much as 5 minutes old?
For example, in your example of weather, a client wouldn't really need temperature updates more often than probably every 10-15 minutes.
Is there any module in Node that can do that, or an approach that fits better?
No, not really. You'll probably just use some sort of timer (either a repeated setTimeout() or setInterval()) in a node.js app to repeatedly carry out your API operations.
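As a minimal sketch of that timer (the endpoint URL and the updatedAt change marker are placeholders for whatever your API actually provides), a chained setTimeout avoids overlapping polls if one response is slow:

```typescript
// Poll the external API on a fixed interval; reschedule only after the current
// request finishes so a slow response never overlaps the next poll.
const POLL_INTERVAL_MS = 5 * 60 * 1000;   // 5 minutes; tune to the latency your clients need
let lastSeen = 0;

async function pollOnce(): Promise<void> {
  try {
    const res = await fetch("https://api.example.com/data");   // placeholder URL
    const payload = await res.json();
    if (payload.updatedAt > lastSeen) {                        // assumed change marker
      lastSeen = payload.updatedAt;
      // ...notify clients / write to your database here...
    }
  } catch (err) {
    console.error("poll failed", err);                         // one failure shouldn't stop the loop
  } finally {
    setTimeout(pollOnce, POLL_INTERVAL_MS);
  }
}

pollOnce();
```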
Use case: Call a weather API for every city in the country and only save data to my DB when it is going to rain in a given city.
Trying to pre-save every possible piece of data from an external API is probably a losing proposition. You're essentially trying to "scrape" all the data from the external API. That is likely against the terms of service and will likely also run afoul of rate limits. And, it's just not very practical.
Instead, you will probably want to fetch data on demand (when a client requests data for Phoenix, then, and only then, do you start collecting data for Phoenix), and once demand for a certain type of data (temperatures in a particular city) is established, you might want to pre-cache that data regularly so you can notify clients of changes. If, after a while, no clients are asking for data from Phoenix, you stop requesting updates for Phoenix until a client establishes demand again.
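One way to express that demand-driven idea (fetchWeather, the TTL, and the refresh interval are all placeholders to tune):

```typescript
// Only keep refreshing cities a client has recently asked about; drop a city from
// the refresh set once nobody has requested it for a while.
const DEMAND_TTL_MS = 30 * 60 * 1000;              // forget a city after 30 min of no interest
const lastRequested = new Map<string, number>();   // city -> last client request time
const cache = new Map<string, unknown>();          // city -> latest weather payload

async function fetchWeather(city: string): Promise<unknown> {
  const res = await fetch(`https://weather.example.com/v1/${encodeURIComponent(city)}`); // placeholder
  return res.json();
}

export async function getWeather(city: string): Promise<unknown> {
  lastRequested.set(city, Date.now());             // record demand
  if (!cache.has(city)) {
    cache.set(city, await fetchWeather(city));     // first request: fetch on demand
  }
  return cache.get(city);
}

// Background refresh, but only for cities with recent demand.
setInterval(async () => {
  for (const [city, ts] of lastRequested) {
    if (Date.now() - ts > DEMAND_TTL_MS) {
      lastRequested.delete(city);                  // demand expired
      cache.delete(city);
    } else {
      cache.set(city, await fetchWeather(city));   // keep the cache warm
    }
  }
}, 10 * 60 * 1000);
```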
I have tried setInterval and node-schedule and got it working, but isn't that expensive for the CPU? For example, over a day I would hit this endpoint every minute to check for new data, but it might only have new data every five minutes or more.
Making a remote network request is not a CPU intensive operation, even if you're doing it every minute. node.js uses non-blocking networking so most of the time during a network request, node.js isn't doing anything and isn't using the CPU at all. The only time the CPU would be briefly used is when you first send the API request and then when you receive back the result from the API call and need to process it.
Whether you really need to "poll" every minute depends upon the data and the needs of the client. I'd ask yourself if your app will work just fine if you check for new data every 5 minutes.
The method I would use to update would live outside the code, in a scheduled batch/PowerShell/bash file. In Windows you can schedule tasks based on time of day or duration since the last run, so you could run a simple command that stops your application for five minutes, runs npm update, and then restarts your application before closing the shell.
That way you stay out of your API and keep code to a minimum, and if your code is inside that Node package being updated, it will be there and ready once you make serious application changes or need to take the server down for maintenance and updates to the low-level code.
This is a light-weight solution for you and it's a method I've used once or twice at my workplace. There are lots of options out there, and if this isn't what you're looking for I can keep looking out for you.

Thermostat to nest.com connection rate

I'm using REST GET calls from a Google script to build a temperature profile of my house during the day. The function triggers every 15 minutes. The last_connection value (and the rest of the data) will sometimes be the same 3-4 calls in a row, while at other times it can be different on each call for several hours running, suggesting variable rates at which the thermostat sends data up to the server.
Does anyone know what governs the thermostat's connections to nest.com or if there is a way to force a connection in order to get an up to date profile?
The thermostat connects to Nest's cloud under the following circumstances:
A 'significant' event has occurred (e.g. the furnace turning on)
A timeout has occurred (e.g. a scheduled check-in appointment)
A thermostat will be considered offline if it misses its check-in window; there is more detail on why that can happen in Nest's "Troubleshooting Offline Status in the Nest apps" support article.
You can force a thermostat to come online by sending a change to it. For example, changing the target temperature will necessarily force the thermostat to wake up so the new value can be set; while awake, the thermostat will update the cloud service with fresh information. Forcing a thermostat to wake as a way to get updated data is not recommended, as you will run into an API rate limit designed to protect the battery on the thermostat. Charging rates on thermostats are rather limited; wake it too often and it will go offline for a while, annoying the user.
Rest assured, if the ambient temperature or humidity changes by a 'significant' amount, the thermostat will wake up and update the cloud service. The thresholds of what signifies a significant amount are harder to predict as they are partially determined by charging rate. If you want to know why that can vary, Nest has filed a patent which goes into great detail.

Throttling login attempts

(This is in principle a language-agnostic question, though in my case I am using ASP.NET 3.5)
I am using the standard ASP.NET login control and would like to implement the following failed login attempt throttling logic.
Handle the OnLoginError event and maintain, in Session, a count of failed login attempts
When this count gets to [some configurable value] block further login attempts from the originating IP address or for that user / those users for 1 hour
Does this sound like a sensible approach? Am I missing an obvious means by which such checks could be bypassed?
Note: ASP.NET Session is associated with the user's browser using a cookie
Edit
This is for an administration site that is only going to be used from the UK and India
Jeff Atwood mentioned another approach: Rather than locking an account after a number of attempts, increase the time until another login attempt is allowed:
1st failed login: no delay
2nd failed login: 2 sec delay
3rd failed login: 4 sec delay
4th failed login: 8 sec delay
5th failed login: 16 sec delay
That would reduce the risk that this protection measure can be abused for denial of service attacks.
See http://www.codinghorror.com/blog/archives/001206.html
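Since the question calls itself language-agnostic, here is the doubling-delay idea sketched in TypeScript; the key (username or IP), base delay, and cap are placeholders, and in ASP.NET the same counters would live in a server-side cache rather than an in-process Map:

```typescript
// Delay before the next allowed login attempt grows with the number of
// consecutive failures recorded for a key (username or IP address).
const failures = new Map<string, number>();        // key -> consecutive failed attempts

export function nextDelayMs(key: string): number {
  const count = failures.get(key) ?? 0;
  // 0 failures -> no delay; then 2 s, 4 s, 8 s, 16 s... capped at 60 s.
  return count === 0 ? 0 : Math.min(2 ** (count - 1) * 2000, 60_000);
}

export function recordFailure(key: string): void {
  failures.set(key, (failures.get(key) ?? 0) + 1);
}

export function recordSuccess(key: string): void {
  failures.delete(key);                            // reset on successful login
}
```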
The last thing you want to do is store all unsuccessful login attempts in a database; that will work well enough, but it also makes it extremely easy for a DDoS attack to bring your database server down.
You are probably already using some type of server-side cache on your web server, memcached or similar. Those are perfect systems for keeping track of failed attempts by IP address and/or username. If a certain threshold for failed login attempts is exceeded, you can then decide to deactivate the account in the database, but you save a lot of reads and writes to your persistent storage for failed-login counters that you don't need to persist.
If you're trying to stop people from brute-forcing authentication, a throttling system like Gumbo suggested probably works best. It will make brute-force attacks uninteresting to the attacker while minimizing the impact on legitimate users under normal circumstances, or even while an attack is going on. I'd suggest just counting unsuccessful attempts by IP in memcached or similar, and if you ever become the target of an extremely distributed brute-force attack, you can always elect to also start keeping track of attempts per username, assuming the attackers are actually trying the same username often. As long as the attack is not extremely distributed, i.e. still coming from a manageable number of IP addresses, the initial by-IP code should keep attackers out adequately.
The key to preventing issues with visitors from countries with a limited number of IP addresses is to not make your thresholds too strict; if you don't receive multiple attempts within a couple of seconds, you probably don't have much to worry about regarding scripted brute-forcing. If you're more concerned with people trying to unravel other users' passwords manually, you can set wider boundaries for subsequent failed login attempts by username.
One other suggestion, which doesn't answer your question but is somewhat related, is to enforce a certain level of password security on your end users. I wouldn't go overboard with requiring a mixed-case, at-least-x-characters, non-dictionary password, etc., because you don't want to bug people too much when they haven't even signed up yet, but simply stopping people from using their username as their password should go a long way toward protecting your service and users against the most unsophisticated – guess why they call them brute-force ;) – of attacks.
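A sketch of the counter idea from this answer, with an in-process Map standing in for memcached; the window, threshold, and per-IP keying are placeholders to tune:

```typescript
// Count failed logins per IP within a sliding window and report whether the IP
// should be blocked. Swap the Map for memcached/Redis if you run multiple servers.
const WINDOW_MS = 15 * 60 * 1000;   // only count failures from the last 15 minutes
const MAX_FAILURES = 10;            // per IP within the window

const attempts = new Map<string, number[]>();      // ip -> timestamps of recent failures

export function registerFailure(ip: string): void {
  const now = Date.now();
  const recent = (attempts.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  recent.push(now);
  attempts.set(ip, recent);
}

export function isBlocked(ip: string): boolean {
  const now = Date.now();
  const recent = (attempts.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  return recent.length >= MAX_FAILURES;
}
```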
The accepted answer, which inserts increasing delays into successive login attempts, may perform very poorly in ASP.NET depending on how it is implemented. ASP.NET uses a thread pool to service requests. Once this thread pool is exhausted, incoming requests will be queued until a thread becomes available.
If you insert the delay using Thread.Sleep(n), you will tie up an ASP.NET thread pool thread for the duration of the delay. This thread will no longer be available to execute other requests. In this scenario a simple DOS style attack would be to keep submitting your login form. Eventually every thread available to execute requests will be sleeping (and for increasing periods of time).
The only way I can think of to properly implement this delay mechanism is to use an asynchronous HTTP handler. See Walkthrough: Creating an Asynchronous HTTP Handler. The implementation would likely need to:
Attempt authentication during BeginProcessRequest and determine the delay upon failure
Return an IAsyncResult exposing a WaitHandle that will be triggered after the delay
Make sure the WaitHandle has been triggered (or block until it has been) in EndProcessRequest
This could possibly affect your genuine users too. For example, in countries like Singapore there are a limited number of ISPs and a smaller set of IPs available to home users.
Alternatively, you could insert a CAPTCHA after x failed attempts to thwart script kiddies.
I think you'll need to keep the count outside the session - otherwise the trivial attack is to clear cookies before each login attempt.
Otherwise a count and lock-out is reasonable - although an easier solution might be a doubling timeout between each login failure, e.g. 2 seconds after the first failed attempt, 4 seconds after the next, 8 after that, and so on.
You implement the timeout by refusing logins during the timeout period - even if the user gives the correct password - and just reply with human-readable text saying that the account is locked out.
Also monitor for same ip/different user and same user/different ip.
