Is there a way to determine the user’s web browser’s current default time zone from within a Vaadin 8.1 app?
Preferably I would get a ZoneId object rather than the legacy TimeZone class.
The WebBrowser class offers getter methods for getCurrentDate(), getDSTSavings(), getRawTimezoneOffset(), getTimezoneOffset(), and isDSTInEffect(). But none of those are a ZoneId. A mere offset is not a time zone. A time zone is a history of offset-from-UTC for a particular region, covering past, present, and future changes such as Daylight Saving Time (DST).
I am aware that ultimately the only reliable way to know the user’s desired/expected time zone is to explicitly ask them in my app. But it would be nice to have a guess as a default.
The proposed duplicate is not a duplicate, asking about getting the current date-time whereas my Question is about the time zone. Furthermore, my Question asks about the modern java.time classes while the proposed Question is about the flawed legacy date-time classes.
An alternative to querying WebBrowser (or any other Vaadin API) is to take the client IP address and use an online service to get the geo information, including a ZoneId.
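Short of asking the user or doing an IP lookup, the offset that WebBrowser does report can at least be turned into a fixed-offset ZoneId to use as a default guess. A minimal sketch (fixed offset only, so no DST rules and no history), assuming it is called from UI code with an active Page:

    import java.time.ZoneId;
    import java.time.ZoneOffset;

    import com.vaadin.server.Page;
    import com.vaadin.server.WebBrowser;

    public final class TimeZoneGuess {

        /** Builds a fixed-offset ZoneId from what the browser reports; not a real zone. */
        public static ZoneId guessFromBrowser() {
            WebBrowser browser = Page.getCurrent().getWebBrowser();
            int rawOffsetMillis = browser.getRawTimezoneOffset(); // offset ignoring DST, in ms
            return ZoneOffset.ofTotalSeconds(rawOffsetMillis / 1000);
        }
    }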
See related Questions:
Find time zone from IP address
How to determine a zip code and city from an IP address?
Mapping US zip code to time zone
Determine a User's Timezone
Geocode an IP address?
Could anyone explain to me how DNSSEC works, in a nutshell?
What I understand so far (though I do not know if it is completely correct) is:
DNS is an old protocol, created in the early days of the Internet, so it has flaws (e.g. no authentication). This allows attacks such as man-in-the-middle and cache poisoning.
The solution? DNSSEC: a protocol that uses public-key cryptography to give authentication and integrity to DNS responses. It works using a chain of trust that starts at the root DNS server; "trust" here means that you trust the public key of the root server.
At the zone level, the process works with one or more key pairs. First, the zone server has the ZSK (zone signing key) and signs the queried data with the private ZSK. It then sends the public ZSK, the data (RRset) and the signature (RRSIG) to the DNS resolver. But now you have to trust the public ZSK. The solution? Another key, the KSK (key signing key). The zone signs the set that contains the public KSK and the public ZSK, and then sends that set, its signature and the public KSK. This guarantees security within the zone.
But what about the whole recursive process that DNS needs? How do we make sure that it is also secure? This is done by having the child server hash its public KSK and send it to its parent, which stores it as a DS (Delegation Signer) record. This is done early on, and I don't know how. That way, if you trust the parent and the parent has the child's DS, and hashing the child's public KSK gives a result equal to the parent's DS, you can trust the child. This creates the whole chain of trust. The secure entry point of this chain is at the root: you assume that you can trust the root's public key.
This is what I think I understand about DNSSEC. If someone could explain it better, fix what I wrote or give more information you think is essential to understanding DNSSEC, I would be very grateful.
Also, if someone could explain the DNSSEC architecture and key management, I would be glad as well.
Thank you very much!!!!!
Your question is very broad and not very related to programming.
As Calle said, you are mostly correct already, so let me just pinpoint some parts to fix.
First, the important part to remember is that all of this is based on asymmetric cryptography: each key has a public part and a private part. The public part is published in DNSKEY records (and, in hashed form, in DS records), while the private part is used to compute the RRSIG records. Anyone with the public key can verify that the signatures in the RRSIG records were indeed produced by the corresponding private key, without ever having access to it.
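To make that concrete, here is a tiny sketch of the underlying primitive (plain java.security, not the DNSSEC wire format): data signed with a private key can be checked by anyone who holds only the public key.

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    public class AsymmetricDemo {
        public static void main(String[] args) throws Exception {
            KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
            byte[] rrsetData = "www.example.com. 3600 IN A 192.0.2.1".getBytes();

            // "Signing side" (the zone operator, holding the private key).
            Signature signer = Signature.getInstance("SHA256withRSA");
            signer.initSign(pair.getPrivate());
            signer.update(rrsetData);
            byte[] signature = signer.sign(); // conceptually, the RRSIG

            // "Validating side" (a resolver, holding only the public key / DNSKEY).
            Signature verifier = Signature.getInstance("SHA256withRSA");
            verifier.initVerify(pair.getPublic());
            verifier.update(rrsetData);
            System.out.println("valid: " + verifier.verify(signature));
        }
    }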
Now on some of your points:
At the zone level, the process works with one or more key pairs. First, the zone server has the ZSK (zone signing key) and signs the queried data with the private ZSK. It then sends the public ZSK, the data (RRset) and the signature (RRSIG) to the DNS resolver.
At the authoritative nameservers for a given zone, you start with the content of the zone, unsigned. In the old days, without DNSSEC, this was exactly what was published.
Now you have at least these three options:
an external process takes the zone and signs it; it all depends on how your keys are managed: if they are in an HSM then you need to send the records there to be signed as, by design, the private key (needed to sign records) never leaves the hardware module. See the OpenDNSSEC software for example.
the nameserver itself, on loading the zone or through separate tools, can sign it and even manage the keys (as they are changed regularly); see this example in BIND.
or nothing is signed beforehand, and the nameserver generates (and caches for some time) the RRSIG records at the moment the query arrives.
See how one big provider does it.
Of course, each solution has its advantages and drawbacks. They all exist in the field.
But now you have to trust the public ZSK. The solution? Another key, the KSK (key signing key).
The KSK/ZSK split is not really about trust, just key material management.
Let us go back a little. The whole DNSSEC setup could theoretically work exactly the same way with only one key in each zone.
But in practice there are often more keys.
First, keys need to be changed regularly. There is no specific reason or end of life for them; it is just the assumption that, if we want to defend against offline key cracking, we need to change them regularly. Keys are published without details of their duration (contrary to signatures in RRSIGs), but each DNSSEC-signed zone should have a DPS (DNSSEC Practice Statement) that covers, among other things, the validity duration of each key (see this other answer of mine for more details on DPS).
Of course, to prepare for key rollovers, you publish a new key in the DNS in advance, so that caches learn about it, before you start signing with it (or at least before you stop signing with the older one).
So you can already have to handle multiple keys at the same time.
Now you are caught between two opposite constraints: you would like to keep using one key for as long as possible, to reduce handling (and if the key is handled externally, the less you have to use it the better), especially since it also exists at the parent zone through the DS record; at the same time, you know that for better security you should use a key for as short a period as possible and renew it often.
You resolve this conundrum by doing a KSK/ZSK split.
Why? Because you can attach a different lifetime to each key. The KSK is the one that will also exist in the parent zone through the DS record, so it is typically something you do not want to change too often. Typically, the KSK will be the most secure key (the most protected one) and will "last" 1 or 2 years (this has to be detailed in the DPS). The ZSKs, used for actually signing the records in the zone, can be generated more often, say every 1 or 2 months, since changes to them only need to be reflected in the zone itself; there is no need to change anything at the parent zone.
See for example the IANA root zone key ceremony: there is only one root key (absolute trust), and it may change in the future (a rollover was already planned for last October but then got postponed). In any case, twice per year there is a specific key ceremony in a data center, and what exactly happens there? The DNS root zone operator, VeriSign, comes with a specific number of keys (which are in fact the future ZSKs), enough to last until the next ceremony with some margin, based on the typical ZSK lifetime detailed in the relevant DPS, and the root KSK is then properly used, with many witnesses at many levels, to sign those ZSKs. The KSK can then be put back into storage and never used again (until the next ceremony), and the DNS operator can start publishing the ZSKs, one by one, with their relevant signatures.
Again, this is customary but certainly not mandatory. Some zones (like .CO.UK; see how it has only one DNSKEY record) decided to use only one key, which is then called a CSK (Common Signing Key), meaning it is at the same time a zone-signing key and a key-signing key (since it signs itself and is also the one referenced by the parent's DS).
This is done by having the child server hash its public KSK and send it to its parent, which stores it as a DS (Delegation Signer) record. This is done early on, and I don't know how.
Each zone has to either send its parent one (or multiple) KSKs (the public part, of course) and let the parent compute the relevant DS records to publish in its zone, or send the DS records directly. It is the same problem at each node in the DNS tree, except for the root of course. And the child needs to do that well in advance of using the relevant key to sign anything, as it often does not control how much time the parent will take before starting to publish the key.
This is the same at each node of the DNS tree in theory; in practice, this transfer of information has to be done out of band from the DNS (except with CDS/CDNSKEY, see below), and it can be different at each node. It often involves at least some purely human interaction, which explains why the ZSK/KSK split is appealing, as it lowers the frequency with which you need to do anything to replace the KSK.
For example, TLDs need to go through a process on the IANA website to submit their new DS records and then wait some time for IANA to process and verify them before they are published in the root zone.
For 2LDs (second-level domain names), you typically go to your registrar and, through some website or API they provide, submit the information that the registrar will send to the registry so that it can publish it.
Nowadays, the registrar-registry dialogue is done using a protocol called EPP (Extensible Provisioning Protocol), which has a specific extension for DNSSEC called secDNS.
This extension allows the registrar, on behalf of its client, to send either (based on registry policy):
the DS record (as 4 separate parameters)
the DNSKEY record (as 4 separate parameters), from which the registry will itself compute the appropriate DS record to publish (see the sketch after this list)
the DS record together with the (related) DNSKEY record, so that the registry can double-check that the DS was computed correctly and that it indeed matches a DNSKEY record the domain already publishes
(These are listed in roughly descending order of how frequently they occur in the field.)
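For reference, the DS digest that ends up in the parent zone is defined in RFC 4034 as a hash over the child's owner name (in canonical wire format) concatenated with the DNSKEY RDATA. A simplified sketch (digest type 2, i.e. SHA-256), assuming you already have the raw DNSKEY RDATA bytes (flags, protocol, algorithm, public key) of the child's KSK:

    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public final class DsDigest {

        /** digest = SHA-256(canonical owner name in wire format | DNSKEY RDATA). */
        public static byte[] sha256Digest(String ownerName, byte[] dnskeyRdata) throws Exception {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            // Owner name in canonical wire format: lowercase labels,
            // each prefixed by its length, terminated by a zero byte.
            for (String label : ownerName.toLowerCase().replaceAll("\\.$", "").split("\\.")) {
                byte[] bytes = label.getBytes(StandardCharsets.US_ASCII);
                buf.write(bytes.length);
                buf.write(bytes);
            }
            buf.write(0);
            buf.write(dnskeyRdata);
            return MessageDigest.getInstance("SHA-256").digest(buf.toByteArray());
        }
    }

The DS record published by the parent then carries the key tag, algorithm number and digest type alongside this digest.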
This solves the provisioning issue. For nodes lower down the tree, you then have fewer and fewer standardized mechanisms.
There is also another path, this time using the DNS itself to provision things instead of going out of band: the CDS/CDNSKEY records described in RFC 7344 and RFC 8078. The leading C stands for Child, and then you get DS or DNSKEY again, because the core idea is to let a node publish such a record in its own zone and then let the parent zone pick it up through DNS queries and provision itself with it.
Of course, this does create some problems, at least for bootstrapping.
And these mechanisms are not used much right now. Especially in the case where the DNS hosting provider is not the registrar, there are various works in progress and ideas so that external parties could influence this and make changes in the parent zone without having to intervene at the EPP level between registrar and registry. If you are interested in these topics, have a look for example at https://www.dk-hostmaster.dk/en/news/cloudflare-integrates-dk-hostmasters-dnssec-setup or https://datatracker.ietf.org/doc/draft-ietf-regext-dnsoperator-to-rrr-protocol/
As for:
Also, if someone could explain the DNSSEC architecture and key management, I would be glad as well.
This is really too broad.
I do not know what you mean by "the DNSSEC architecture": there is no one-size-fits-all approach. You will need to consider at least the volume of your zone (the number of records to sign), how frequently you re-sign (if not signing on the fly), the associated key rotations, and how and where your keys are stored.
Key management is not a problem specific to DNSSEC either. An X.509 PKI certificate authority has almost the same problems regarding the security of its private key and how it uses it to sign other keys (which in that case are in fact certificates).
What algorithm or set of heuristics can a server and a mobile app use so that the server can always be fairly certain that the app is used within the boundaries of a given geographic region (e.g. a country)? How can the server ensure that app users outside of the defined region can not falsely claim that they are inside the region?
You can't be 100% sure that the user isn't reporting a fake location; you can only make the process of faking it as difficult as possible. You should implement several checks, depending on the data you have access to:
1) the user's IP address (the user can use a proxy)
2) the device's GPS coordinates (they can be spoofed)
3) the locale of the device (not a reliable indicator)
One of the most secure checks (but also not 100%) is sending the user an SMS with a confirmation code, which they have to type into the app.
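The code itself is the trivial part; the harder part is the SMS gateway, which you have to get from some provider. A minimal sketch of generating a short one-time code to send and later compare against the user's input (a hypothetical helper, not tied to any particular SMS API):

    import java.security.SecureRandom;

    public final class OneTimeCode {

        private static final SecureRandom RANDOM = new SecureRandom();

        /** Returns a 6-digit code such as "042917" to send by SMS and check on entry. */
        public static String next() {
            return String.format("%06d", RANDOM.nextInt(1_000_000));
        }
    }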
One of the most sophisticated algorithms known to me is in Google Play (this is how some apps can be made available only in certain countries). It checks parameters such as the IP address, the user's mobile operator and several others, but there are tools (like Market Enabler) and techniques that can trick the system.
If you don't want to use Google Play or other such mechanisms, the best way is to use Cloudflare. I say best because, first, it costs nothing performance-wise or money-wise; secondly, it is easy to use; and thirdly, you need it anyway if you expect a large number of users, since it provides nice tools such as a static cache, an optimizer, analytics, user blocking, country blocking, etc.
Once you sign up for a free Cloudflare account, you can set up your server's public IP address there so that all traffic comes through the Cloudflare proxy network.
After that, everything is pretty straightforward: you can install the Cloudflare module on your server.
In your app, you can get the visitor's country code from the server request variable HTTP_CF_IPCOUNTRY; for example, $_SERVER['HTTP_CF_IPCOUNTRY'] in PHP. It will give you AU for Australia (ISO 3166-1 country codes). It doesn't matter what language you use.
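For instance, in a Java servlet the same information could be read like this (a sketch; it assumes the app is served behind Cloudflare with IP geolocation enabled, otherwise the header is simply absent):

    import javax.servlet.http.HttpServletRequest;

    public final class GeoLookup {

        /** Returns the two-letter ISO 3166-1 code Cloudflare attaches, or null if missing. */
        public static String visitorCountry(HttpServletRequest request) {
            return request.getHeader("CF-IPCountry"); // e.g. "AU" for Australia
        }
    }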
The Cloudflare IP database is frequently updated and seems very reliable for detecting a user's geolocation without performance overhead.
You also get free protection from attacks, plus free cache and CDN features for fast loading, etc.
I had used several other ways but none of them was quite reliable.
If your app runs without a server, you can still put a file on a server and make a call to that remote URL to get the user's country on each request.
Apart from the things that #bzz mentioned, you can read the SSIDs of the user's nearby Wi-Fi networks; services like http://www.skyhookwireless.com/ provide an API (I think with browser plugins, I am not sure) which you can use to get a location by submitting those Wi-Fi SSIDs.
If you need the user to be within a specific region the whole time they are using the app, you will probably end up using all of the options together. If you just need a one-time check, the SMS-based approach is the best one IMO.
For accessing the Wi-Fi SSID, refer to this; still, you cannot be 100% sure.
What is the best approach to fetching a certified time stamp from the internet from within my app?
I have a licence file that expires at regular intervals, and I must make sure that the certificate has not expired.
Is such a thing even possible, and does it exist? Ideally, when my app runs, it should get a secured/certified time stamp representing the current time, and I want to make sure it cannot be faked by whoever runs the application.
Are there any services out there that offer this? It can be commercial; I just don't know where to start and am looking for some pointers.
Look at the Time-Stamp Protocol (RFC 3161). It gives you secure time. To use the protocol properly, you can ask the server to timestamp some random hash (the server doesn't care what it is a hash of), then validate the timestamp and, if it's OK, use the time it contains. That would be the most effective approach.
There was a TSP client available in BouncyCastle, if memory serves, and our SecureBlackbox product (including the free CryptoBlackbox package) also includes a TSP client and powerful validation mechanisms.
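For illustration, here is a rough sketch of the RFC 3161 exchange using BouncyCastle's TSP classes. The TSA URL is a placeholder, and a real client would additionally verify the token's signature and certificate chain against a TSA certificate it trusts:

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.math.BigInteger;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Date;

    import org.bouncycastle.tsp.TSPAlgorithms;
    import org.bouncycastle.tsp.TimeStampRequest;
    import org.bouncycastle.tsp.TimeStampRequestGenerator;
    import org.bouncycastle.tsp.TimeStampResponse;

    public class TsaClock {

        // Placeholder TSA endpoint; replace with the URL of the service you use.
        private static final String TSA_URL = "https://tsa.example.com/tsr";

        public static Date fetchTrustedTime() throws Exception {
            // 1. Hash some random bytes; the TSA does not care what the hash is of.
            byte[] randomData = new byte[32];
            new SecureRandom().nextBytes(randomData);
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(randomData);

            // 2. Build the RFC 3161 request, asking the TSA to include its certificate.
            TimeStampRequestGenerator gen = new TimeStampRequestGenerator();
            gen.setCertReq(true);
            TimeStampRequest request =
                    gen.generate(TSPAlgorithms.SHA256, digest, BigInteger.valueOf(System.nanoTime()));

            // 3. POST it to the TSA.
            HttpURLConnection conn = (HttpURLConnection) new URL(TSA_URL).openConnection();
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/timestamp-query");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(request.getEncoded());
            }

            // 4. Read the response and validate it against our request (nonce, digest, policy).
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (InputStream in = conn.getInputStream()) {
                byte[] chunk = new byte[4096];
                int n;
                while ((n = in.read(chunk)) > 0) {
                    buf.write(chunk, 0, n);
                }
            }
            TimeStampResponse response = new TimeStampResponse(buf.toByteArray());
            response.validate(request);

            // The signed time asserted by the TSA.
            return response.getTimeStampToken().getTimeStampInfo().getGenTime();
        }
    }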
We have an application that includes a voting component.
To try to minimise voter fraud, we allow N votes from the same IP address within a specific period. If this limit is hit, we ignore that IP address for a while.
The issue with this approach is that if a group of people from a school or similar place vote, they quickly hit the limit. Their voting can also occur very quickly (e.g. one user in the class asks his classmates to vote, which produces a large number of votes in a short period).
We can look at setting a cookie on the user's computer to help determine whether accounts are being shared, or check the user agent string and use that too.
Apart from tracking by IP, what other strategies do people use to determine whether a user is legitimate or a shared account when the actual IP is shared?
If your goal is to prevent cheating in online voting, the answer is: you can't, unless you use something like SSL client certificates (cumbersome).
Some techniques that make it harder would be using some kind of one-time token sent through e-mail or SMS. Every smart kid knows how to get around control cookies using the privacy mode of modern web browsers.
My boss asked me whether WebLog Expert (http://www.weblogexpert.com/lite.htm) is reliable at calculating the average visit time of visitors to a web site. Since HTTP is a stateless protocol, I think the average time might be open to personal interpretation. Does anyone use WebLog Expert? Is the visitors' average time reliable? Does anyone understand its criteria for processing Apache logs to work out the average time?
From the WebLog Expert Lite help, the following definition:
Visitor - The program determines number of visitors by the IP addresses. If a request from an IP address came after 30 minutes since the last request from this IP, it is considered to belong to a different visitor. Requests from spiders aren't used to determine visitors.
That's a fairly useful heuristic for delimiting a visit if all you have to go on is a timestamp and a requesting IP address. (I'm not sure how WebLog Expert determines that a visitor is a spider, but that was irrelevant to my purpose.)
However, on closer inspection, I found the average visit time to be highly variable for our web app: some users request only a page or two, while others are on for hours. So a single "average visit duration" metric might not give you a perfect understanding of your site's traffic.
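To make the heuristic concrete, here is a small sketch of the same 30-minute rule applied to (IP, timestamp) pairs from a log, assuming entries are fed in chronological order:

    import java.time.Duration;
    import java.time.Instant;
    import java.util.HashMap;
    import java.util.Map;

    // Requests from the same IP count as one visitor until a gap
    // of more than 30 minutes is seen, at which point a new visitor is counted.
    public class VisitorCounter {
        private static final Duration SESSION_TIMEOUT = Duration.ofMinutes(30);

        private final Map<String, Instant> lastSeen = new HashMap<>();
        private long visitors = 0;

        void record(String ip, Instant timestamp) {
            Instant previous = lastSeen.get(ip);
            if (previous == null
                    || Duration.between(previous, timestamp).compareTo(SESSION_TIMEOUT) > 0) {
                visitors++; // first request from this IP, or gap exceeded the timeout
            }
            lastSeen.put(ip, timestamp);
        }

        long visitors() {
            return visitors;
        }
    }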
I can't comment on that product in particular, but average time is usually calculated using some very clever bits of JavaScript.
You can attach events to various parts of the page in JavaScript which fire off requests to the analytics servers. For example, when the user navigates away from a page, clicks on a link or closes the window, the browser can send off a JavaScript request letting them know that the user has left. While this isn't 100% reliable, I think it provides a reasonable estimate of how long people spend there.
I get entirely different results if I change the "Visitor session timeout".
Our internal network users (the majority of our visitors) all reach our website (hosted externally) from the same IP (through our ISP), so the only way to determine a new visitor is this timeout. Choosing 1, 5 or 10 minutes produces very different results. HIGHLY UNRELIABLE. The only thing to do is be consistent and use the same parameters for comparative results, i.e. increased/decreased traffic. By the way, the update to WebLog Expert (version 7 -> 8) threw all of that out the window with entirely different counting mechanisms.