The client node's clock is deliberately set 10 hours ahead of the KDC.
I then run kinit and klist from the client node:
kinit does not complain, and klist shows the ticket cache.
~date -s 19:20:38
~kinit -kt /etc/kuduclient.keytab kuduclient@EXAMPLE.COM
~klist
Ticket cache: KEYRING:persistent:0:0
Default principal: kuduclient@EXAMPLE.COM
Valid starting Expires Service principal
11/11/2020 09:49:23 11/12/2020 09:11:00 krbtgt/EXAMPLE.COM@EXAMPLE.COM
renew until 11/18/2020 09:11:00
Time in Kerberos is relative. First, it's supposed to be based on the UTC time zone. If the 10 hour difference is just time-zone related, then the Kerberos stack will happily convert to UTC and all is well.
Second, many (most?) Kerberos stacks don't care about exact time; they care about the time relative to what the KDC thinks it is. What I mean is: the client can make a request to the KDC, and if the client's time is significantly out, the KDC will return an error that includes what it thinks the current time is. The client is then free to resend the request with its time adjusted to fall within the KDC's window. This still guarantees security correctness because the time constraints are still met from the perspective of the authority -- the KDC.
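For example (an illustration, not taken from the environment above), MIT Kerberos clients expose this behaviour through krb5.conf settings; the values shown are the documented defaults:
[libdefaults]
# maximum clock skew the library will tolerate on its own (default: 300 seconds)
clockskew = 300
# if nonzero, record the offset between the local clock and the KDC's time
# and apply it to subsequent requests (default: 1)
kdc_timesync = 1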
Related
When I launch a JupyterLab session, it never logs me off. I observed that the inactive session timeout for the web application is not adequate: after testing for about an hour, I concluded that the inactive session timeout is either not configured or longer than one hour. I want to set a session timeout so that the session is logged off when the user is away for a longer period of time.
1. What is the name of that configuration file?
2. Where is the file located?
3. What parameters need to be changed to resolve this (currently no session timeout)?
Kindly provide an answer.
I’m currently working on a docker-compose setup that can be used to deploy a cluster of CouchDB 2 nodes. I’ve finally got the nodes working and the data syncing across nodes, but unless I am mistaken, it appears that CouchDB does not sync user sessions.
My setup has 3 nodes and uses an haproxy setup almost identical to haproxy.cfg. As per my configuration, haproxy routes incoming traffic on port 5984 to port 5984 on all 3 nodes.
Assume an admin username of root and password of password.
I first log in with:
curl -vX POST http://localhost:5984/_session -H 'Content-Type: application/x-www-form-urlencoded' -d 'name=root&password=password'
Note the returned AuthSession is used below as AUTHSESSION.
Then, I issue the following:
curl -X PUT http://localhost:5984/mydb --cookie AuthSession=AUTHSESSION -H "X-CouchDB-WWW-Authenticate: Cookie" -H "Content-Type: application/x-www-form-urlencoded"
This usually fails with “You are not a server admin.” I can continue to issue the same PUT and it will eventually succeed as I assume that haproxy eventually routes the request to the single node with which I am authenticated. As haproxy is using round robin there is a 1 in 3 chance that I will hit the target node.
I would think that CouchDB 2 could handle syncing user sessions across nodes. Am I making a silly assumption here?
(Please see run cluster via docker-compose to replicate my setup)
Update with specific solution for my docker-compose setup
As per @lossleader's answer, you need to set the secret in the [couch_httpd_auth] section so that it is identical across nodes. Moreover, you need to set the same admin username and password in the [admins] section. The detail I missed here is that all nodes must have the exact same password hash in the .ini file. Having the same cleartext password is not enough; otherwise each node will generate its own salt and produce a different hash.
See run cluster via docker-compose for my complete setup.
Short answer: yes.
Long answer:
As others have commented, CouchDB doesn't keep track of the sessions it has issued, so it's true that there is no mechanism to sync sessions themselves, but there are two non-session settings that you need to sync yourself before a session cookie created on one node of a cluster will be valid on any other.
[couch_httpd_auth]
secret = foo
This is the secret value used to sign session cookies. If not present when a session cookie is requested, it is set to a random value. Each node of a cluster will, naturally, generate a different value.
So, before startup, arrange for this value to be set to a large random value that is the same on all nodes of your cluster.
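As a sketch of one way to do that during provisioning (assuming openssl is available; the local.ini path may differ in your packaging):
openssl rand -hex 32
# then write the resulting value into every node's local.ini before first start:
# [couch_httpd_auth]
# secret = <the same generated value on every node>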
[admins]
foo = -pbkdf2-2cbae77dc3d2dadb43ad477d312931c617e2a726,cd135ad4d6eb4d2f916cba75935c3ce7,10
This section contains the salted password hashes of each admin user. The salt is included in the signature of the session cookies. On a password change the salt is re-randomized, so the effect of including the salt is that session cookies issued before the password change are invalidated.
You also need this section to be identical on all nodes; otherwise each node will generate its own random salt when hashing the admin password and end up with a different hash.
It is better to generate this section externally as part of your node provisioning automation.
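As a sketch of what generating the [admins] entry externally could look like (assuming Node.js on the provisioning host, and assuming CouchDB's historical PBKDF2-SHA1 scheme with the low iteration count shown in the example above; verify the exact format against your CouchDB version before relying on it):
import { pbkdf2Sync, randomBytes } from "crypto";

// Builds a "-pbkdf2-<derivedkey>,<salt>,<iterations>" value shaped like the
// [admins] line above. Copy the SAME resulting line into every node's .ini.
function couchAdminHash(password: string, iterations = 10): string {
  const salt = randomBytes(16).toString("hex");                                  // per-admin salt
  const key = pbkdf2Sync(password, salt, iterations, 20, "sha1").toString("hex"); // 20-byte SHA-1 output
  return `-pbkdf2-${key},${salt},${iterations}`;
}

console.log(`root = ${couchAdminHash("password")}`);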
I hope that gets you started. We would like to improve this situation in a future release; it obviously still reflects the pre-clustering versions of CouchDB.
CouchDB session tokens are just an HMAC hash of the user's password salt, the server secret, and the time. Sessions aren't stored in CouchDB at all, even on a single-node system. So there is nothing to sync.
You can, and many people do, generate sessions entirely programmatically, external to CouchDB.
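For illustration, a minimal sketch of that idea in Node.js/TypeScript. It assumes the cookie layout used by older CouchDB releases, base64url("name:timestamp_hex:" + HMAC-SHA1 keyed with serverSecret + userSalt); treat the exact format as an assumption and check it against your version:
import { createHmac } from "crypto";

// Hypothetical helper: mints an AuthSession cookie value outside CouchDB.
// serverSecret is the [couch_httpd_auth] secret; userSalt is the salt stored for the user.
function makeAuthSession(name: string, serverSecret: string, userSalt: string): string {
  const timeHex = Math.floor(Date.now() / 1000).toString(16).toUpperCase();
  const sessionData = `${name}:${timeHex}`;
  const hmac = createHmac("sha1", serverSecret + userSalt).update(sessionData).digest();
  return Buffer.concat([Buffer.from(`${sessionData}:`), hmac]).toString("base64url");
}

// Usage sketch, reusing the secret and salt from the example lines above:
// curl --cookie "AuthSession=<printed value>" http://localhost:5984/mydb
console.log(makeAuthSession("root", "foo", "cd135ad4d6eb4d2f916cba75935c3ce7"));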
How frequently does Traffic Manager monitor endpoints? It is obviously not event-driven (when an endpoint goes down, it takes up to 30 seconds to 2.5 minutes to identify the status of the endpoint, as per my observations). Can we configure this frequency? I cannot see any configuration for this.
Is there a relationship between Traffic Manager Monitoring interval and TTL?
This may look like a general question, but my real issue is that I experience service downtime in a failover scenario (failover of the primary). I understand the effect of the TTL: until the client's DNS cache expires, it keeps calling the cached endpoint. I have spent a lot of time on this and have now narrowed it down to a specific question.
The issue is that there is a delay before Traffic Manager identifies the endpoint status after it is stopped or started. I need a logical explanation for this and could not find any Azure reference which explains it.
Traffic manager settings
I need to understand this delay and plan for that downtime.
I have run into the same issue. Check this link; it explains the monitoring behaviour:
Traffic Manager Monitoring
The monitoring system performs a GET, but does not receive a response in 10 seconds or less. It then performs three more tries at 30 second intervals. This means that at most, it takes approximately 1.5 minutes for the monitoring system to detect when a service becomes unavailable. If one of the tries is successful, then the number of tries is reset. Although not shown in the diagram, if the 200 OK message(s) come back more than 10 seconds after the GET, the monitoring system will still count this as a failed check.
This explains the 30-second to 2.5-minute delay.
Basically, the maximum delay would be about 1.5 minutes plus the TTL, as per the details above.
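For example, with the roughly 1.5 minutes of failed probes described above and a DNS TTL of, say, 60 seconds, a client could keep resolving to the dead endpoint for up to about 90 s + 60 s ≈ 2.5 minutes in the worst case, which lines up with the 30-second to 2.5-minute window observed in the question.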
I am trying to get an interval function, pre-programmed on separate devices, to stay in sync across those devices, so that several mobile devices all run the same interval function in step. At first I thought I could just use the devices' internal clocks and start the functions on the 0 of the minute. I realize now that mobile clocks aren't really all that accurate and are not synced to each other, so I need a new solution.
I'm using heroku, node.js, socket.io, and ionic if that helps at all.
Conceptually, you could do the following:
Have each client establish a time reference vs. a common server.
Send a color changing message to each client with a specific timestamp a short time in the future when the color changing effect is to be started.
When each client receives the color changing message, it looks at the scheduled timestamp a short time in the future, corrects for the time reference and schedules the event vs. its own clock.
When that time arrives at each client, each client starts playing the event.
This will be as accurate as the time reference you establish in step 1, and that's where the tricky portion is and where the accuracy, or lack of accuracy, is established. There's a description of one method for doing that in this post: Measuring time difference between networked devices.
Once you establish the time delta between the client clock and the reference clock, you store that delta locally and you can apply it to any future time directives from the server. So, if you receive a directive to carry out some operation at 12:30:05.00, but your client clock has been measured to be 12.33 seconds behind the server reference, then you would subtract those 12.33 seconds from the scheduled time and set a timer that would fire at 12:29:52.67 on your local clock.
You handle the fact that the transit time to each client may not be the same by sending the directive for a specific time in the future. You can pick any time in the future, but it must be further into the future than the longest transit time to any client, so that every client receives the directive before the scheduled moment. You can measure that transit time from each client and report it to the server or, if you can schedule far enough in advance, just send the directive at least several seconds ahead of time (longer than any transit time is likely to be under normal operating conditions).
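A minimal sketch of those steps with socket.io in TypeScript (the server URL, event names like timePing/colorChange, and runEffect are made-up placeholders, and the offset measurement is the crude round-trip-halving shortcut rather than the more careful method from the linked post):
import { io } from "socket.io-client";

const socket = io("https://your-app.herokuapp.com"); // placeholder URL
let clockOffsetMs = 0; // estimated (server time - local time)

// Step 1: establish a time reference vs. the server.
function syncClock(): void {
  const t0 = Date.now();
  socket.emit("timePing", (serverNowMs: number) => { // server replies with its Date.now()
    const t1 = Date.now();
    const oneWayMs = (t1 - t0) / 2;                  // assume symmetric latency
    clockOffsetMs = serverNowMs + oneWayMs - t1;     // rough clock delta
  });
}

// Steps 2-4: the server broadcasts a start time (in its own clock) a few seconds ahead;
// each client converts it to its local clock and schedules the effect.
socket.on("colorChange", (msg: { color: string; startAtServerMs: number }) => {
  const startAtLocalMs = msg.startAtServerMs - clockOffsetMs;
  setTimeout(() => runEffect(msg.color), Math.max(0, startAtLocalMs - Date.now()));
});

function runEffect(color: string): void {
  document.body.style.backgroundColor = color; // stand-in for "playing the event"
}

syncClock();

// Server side (sketch): socket.on("timePing", ack => ack(Date.now()));
// io.emit("colorChange", { color: "#ff0000", startAtServerMs: Date.now() + 3000 });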
(This is in principle a language-agnostic question, though in my case I am using ASP.NET 3.5.)
I am using the standard ASP.NET login control and would like to implement the following failed login attempt throttling logic.
Handle the OnLoginError event and maintain, in Session, a count of failed login attempts
When this count gets to [some configurable value] block further login attempts from the originating IP address or for that user / those users for 1 hour
Does this sound like a sensible approach? Am I missing an obvious means by which such checks could be bypassed?
Note: ASP.NET Session is associated with the user's browser using a cookie
Edit
This is for an administration site that is only going to be used from the UK and India
Jeff Atwood mentioned another approach: Rather than locking an account after a number of attempts, increase the time until another login attempt is allowed:
1st failed login: no delay
2nd failed login: 2 sec delay
3rd failed login: 4 sec delay
4th failed login: 8 sec delay
5th failed login: 16 sec delay
That would reduce the risk that this protection measure can be abused for denial of service attacks.
See http://www.codinghorror.com/blog/archives/001206.html
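A sketch of that delay schedule (the function name and the cap are illustrative; the delay after the nth consecutive failure is 2^(n-1) seconds for n ≥ 2):
// Illustrative only: how long to delay/refuse logins after the nth consecutive failure.
function loginDelaySeconds(failedAttempts: number, capSeconds = 60): number {
  if (failedAttempts < 2) return 0;                        // 1st failure: no delay
  return Math.min(2 ** (failedAttempts - 1), capSeconds);  // 2nd: 2 s, 3rd: 4 s, 4th: 8 s, ...
}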
The last thing you want to do is store every unsuccessful login attempt in a database; that will work well enough, but it also makes it extremely trivial for a DDoS attack to bring your database server down.
You are probably using some type of server-side cache on your webserver, memcached or similar. Those are perfect systems to use for keeping track of failed attempts by IP address and/or username. If a certain threshold for failed login attempts is exceeded you can then decide to deactivate the account in the database, but you'll be saving a bunch of reads and writes to your persisted storage for the failed login counters that you don't need to persist.
If you're trying to stop people from brute-forcing authentication, a throttling system like Gumbo suggested probably works best. It will make brute-force attacks uninteresting to the attacker while minimizing impact for legitimate users under normal circumstances or even while an attack is going on. I'd suggest just counting unsuccessful attempts by IP in memcached or similar, and if you ever become the target of an extremely distributed brute-force attack, you can always elect to also start keeping track of attempts per username, assuming that the attackers are actually trying the same username often. As long as the attack is not extremely distributed, as in still coming from a countable number of IP addresses, the initial by-IP code should keep attackers out pretty adequately.
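A rough sketch of that per-IP (or per-username) counting, with a plain in-memory map standing in for memcached; the threshold and one-hour window are arbitrary assumptions:
// Stand-in for a memcached-style counter with an expiry window.
const failedLogins = new Map<string, { count: number; expiresAt: number }>();

const WINDOW_MS = 60 * 60 * 1000; // 1 hour, as in the question
const MAX_ATTEMPTS = 5;           // arbitrary threshold

function recordFailure(key: string): void { // key e.g. "ip:203.0.113.7" or "user:alice"
  const now = Date.now();
  const entry = failedLogins.get(key);
  if (!entry || entry.expiresAt < now) {
    failedLogins.set(key, { count: 1, expiresAt: now + WINDOW_MS });
  } else {
    entry.count += 1;
  }
}

function isBlocked(key: string): boolean {
  const entry = failedLogins.get(key);
  return !!entry && entry.expiresAt > Date.now() && entry.count >= MAX_ATTEMPTS;
}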
The key to preventing issues with visitors from countries with a limited number of IP addresses is to not make your thresholds too strict; if you don't receive multiple attempts within a couple of seconds, you probably don't have much to worry about regarding scripted brute-forcing. If you're more concerned with people trying to unravel other users' passwords manually, you can set wider boundaries for subsequent failed login attempts by username.
One other suggestion, which doesn't answer your question but is somewhat related, is to enforce a certain level of password security on your end users. I wouldn't go overboard with requiring a mixed-case, at-least-x-characters, non-dictionary password, because you don't want to bug people too much when they haven't even signed up yet, but simply stopping people from using their username as their password should go a very long way towards protecting your service and users against the most unsophisticated – guess why they call them brute-force ;) – of attacks.
The accepted answer, which inserts increasing delays into successive login attempts, may perform very poorly in ASP.NET depending on how it is implemented. ASP.NET uses a thread pool to service requests. Once this thread pool is exhausted, incoming requests will be queued until a thread becomes available.
If you insert the delay using Thread.Sleep(n), you will tie up an ASP.NET thread pool thread for the duration of the delay. This thread will no longer be available to execute other requests. In this scenario a simple DOS style attack would be to keep submitting your login form. Eventually every thread available to execute requests will be sleeping (and for increasing periods of time).
The only way I can think of to properly implement this delay mechanism is to use an asynchronous HTTP handler. See Walkthrough: Creating an Asynchronous HTTP Handler. The implementation would likely need to:
Attempt authentication during BeginProcessRequest and determine the delay upon failure
Return an IAsyncResult exposing a WaitHandle that will be triggered after the delay
Make sure the WaitHandle has been triggered (or block until it has been) in EndProcessRequest
This could possibly affect your genuine users too. For example, in countries like Singapore there are a limited number of ISPs and a smaller set of IPs available to home users.
Alternatively, you could insert a CAPTCHA after x failed attempts to thwart script kiddies.
I think you'll need to keep the count outside the session - otherwise the trivial attack is to clear cookies before each login attempt.
Otherwise a count and lock-out is reasonable - although an easier solution might be to have a doubling timeout between each login failure, i.e. 2 seconds after the first failed attempt, 4 seconds after the next, 8 after that, etc.
You implement the timeout by refusing logins during the timeout period - even if the user gives the correct password - and just reply with human-readable text saying that the account is temporarily locked out.
Also monitor for the same IP with different users, and the same user from different IPs.