Update IIS web.config

I have created a C# project. Now I want to save an IP address in the web.config file as a key/value pair:
<add key="ip" value="xxxx.xxx.x.xxx" />
I can read the value by its key name and also update it; this works fine on localhost.
But once I deploy the files to IIS, I can still read the value but can no longer update it. Mozilla Firebug shows the following error:
NetworkError: 500 Internal Server Error
Access to the path 'C:\inetpub\wwwroot\Order1\v1y5ay43.tmp' is denied.
I am stuck here. I googled it but did not find any solution.

You can't do that.
A web app should not modify its own config file, and by default it cannot: the application pool identity has no write access to the application folder, which is why the temp-file write in your error is denied. It is a security measure that you absolutely should NOT circumvent.
Other options:
Store the value in session state, in the cache, or in a global static hashtable.
Normally, client-specific stuff should be stored in the session state (if you have that enabled).
Remember that you likely have multiple clients at the same time, so you would need to keep track of that.
Note: why would you want to store the IP at all? It is readily available as a request property to all your code-behind classes, so why store it?

Related

Database connection security in Node.js

When I connect to a database in Node, I have to provide the DB name, username, password, etc. If I'm right, any user can access a .js file if they know its address. So... how does this work? Is it safe?
Node.js server side source files should never be accessible to end-users.
In frameworks like Express, the convention is that requests for static assets are handled by the static middleware, which serves files only from a specific folder in your solution. Explicit requests for other source files that exist in your code base are thus ignored (a 404 is passed down the pipeline).
Consult https://expressjs.com/en/starter/static-files.html for more details.
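For illustration, here is a minimal sketch of that convention (the public folder name and the port are assumptions, not something from the question):

const express = require('express');
const path = require('path');

const app = express();

// Only files under ./public are ever served directly; a request for any
// other file in the code base falls through the pipeline and ends in a 404.
app.use(express.static(path.join(__dirname, 'public')));

app.listen(3000);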
Although there are other options to further limit the visibility of sensitive data, note that anyone with admin rights who gains access to your server would of course be able to retrieve the data (and this is perfectly acceptable).
I am assuming from the question that the DB and Node are on the same server, and that you have created either a JSON file, an env file, or a function that picks up your DB parameters.
The one server = everything (code + DB) setup is not the best in the world. However, if you are limited to it, then what you can do depends on the DB you are using. MongoDB Community Edition allows you to set up limited security protocols, such as creating users within the DB itself. Each user is a {username, password, rights} combination that grants scaled rights based on the type of user you set up. This is not foolproof, but it offers some protection even if someone gets hold of your DB parameters. If you are using a more extended version of MongoDB, this question would be superfluous. For other DBs, consult the documentation.
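As a hedged sketch of the env-file idea combined with such a DB-level user (the variable names and the dotenv package are assumptions for illustration):

// Credentials come from the environment (e.g. a .env file loaded by dotenv),
// so they never sit verbatim in a source file.
require('dotenv').config();
const { MongoClient } = require('mongodb');

const uri = `mongodb://${process.env.DB_USER}:${process.env.DB_PASS}` +
  `@localhost:27017/${process.env.DB_NAME}`;

async function connect() {
  const client = new MongoClient(uri);
  await client.connect(); // authenticates as the limited DB user
  return client.db(process.env.DB_NAME);
}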
However, all that being said, you should really have the DB behind a public-facing server, allow only SSH into it, and keep one port open to receive information from your program. The one server = everything format is not safe in the long run, though it is fine for development.
If you are using MongoDB, you may want to take a look at Mongoose coupled with Mongoose Encryption. I personally do not use them, but they may solve your problem in the short run.
If your DB is MySQL or similar, then I suggest you consult its documentation.

authentication-flows email URLs do not work after web server reset

I have been playing around with authentication-flows and noticed that when I restart the web server, the URLs no longer work; they are all invalid. I stepped through with the debugger, but I am still a bit lost as to exactly why, though I have a lot of good guesses as to why it should happen (and I am sure you do also).
I want to make a service that will be distributed across multiple containers, where any of them could serve an incoming request. As the solution stands right now, it looks like I will have to make modifications to make that possible.
What exactly is making the URLs invalid? And what changes could I make to make my proposed solution possible?
Thank you in advance.
In response to OhadR's comment:
1. Why the URL is invalid
Let me tell you how I get the error. I deploy the war and submit "forgot password". I receive the email to reset my password, then stop the war. My reset password page extracts the enc parameter; I then redeploy the war. After that, I send a REST request with the enc and a new password to the /rest/setNewPassword mapping, and receive:
09 Jan 2016 03:50:48,799 [http-nio-8082-exec-1] ERROR
web.rest.UserActionRestController - Failed to decrypt URL content
aX8uaOWkqAUQN2xOzlPAOHJjPZaxBwho7.yoMeUtMnJA
In ohadr\crypto\service\CryptoService.java there is an exception on line 261:
throw new CryptoException("Failed to decrypt URL content " +
based64EncryptedContent, e);
Using a breakpoint, I then find:
javax.crypto.BadPaddingException: Given final block not properly padded
I am sure if you try to reproduce this issue, you will find the same results...
Note: when I do this without the redeploy, everything works great!
2. How to make auth-flows work as SaaS
There are three use cases I want this service to fulfil:
Currently, if I host the service and it goes down without a failover, people who hold URLs will be unable to use their links when it comes back up. I want them to be able to use the links regardless.
(untested, but will be soon) Similar to the second, if I host this service on multiple Docker containers, I believe one will not be able to handle a link that did not originally come from its own container, and therefore containers could not share unsorted loads. It should be able to read any of the encs and process them.
EDIT:
1. Why the URL is invalid
An even easier way to test this is to submit a forgotten password, get the email, and then stop the war. Redeploy it, then click the link. I got this stack trace:
https://drive.google.com/file/d/0Bwa-JXbjFUDueXVMWWJibjY2Zm8/view?usp=sharing
Don't worry about CSRF; it is not enabled.
1. Why the URL is invalid
By the looks of it, the ICryptoUtil instance is re-created after you redeploy the war.
CryptoService.java line 38:
return ContextLoader.getCurrentWebApplicationContext().getBean(ICryptoUtil.class);
I suggest you do a small test: encrypt a string twice, now and after the redeploy, and compare the results.
If you get two different results, then your crypto instance is not capable of decrypting a string encrypted by another crypto instance.
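To illustrate the failure mode generically (plain Node crypto here, not the library's API): two independently generated keys cannot read each other's output, which is exactly the symptom after a redeploy.

const crypto = require('crypto');

const iv = crypto.randomBytes(16);
const keyBeforeRedeploy = crypto.randomBytes(32); // key of the first deployment
const keyAfterRedeploy = crypto.randomBytes(32);  // key generated on redeploy

const cipher = crypto.createCipheriv('aes-256-cbc', keyBeforeRedeploy, iv);
const encrypted = Buffer.concat([cipher.update('reset-link-payload', 'utf8'), cipher.final()]);

const decipher = crypto.createDecipheriv('aes-256-cbc', keyAfterRedeploy, iv);
try {
  Buffer.concat([decipher.update(encrypted), decipher.final()]);
} catch (e) {
  console.log(e.message); // "bad decrypt" -- the analogue of BadPaddingException
}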
@EdiZ is right.
To be more accurate: every time your web app loads, Spring loads all the beans, among them the Crypto library's beans, such as CryptoUtil and CryptoProvider. If you look carefully, you will notice in DefaultCryptoProvider.loadMasterKeys() that a new key is generated.
I believe that explains the behavior you see.
Currently, if I host the service and it goes down without a failover, people who hold URLs will be unable to use their links when it comes back up. I want them to be able to use the links regardless.
That seems to be a duplicate of your first question; the first issue will have to be resolved in order to make this work as you wish. If the server reboots, all the links become invalid; the users will have to click "forgot password" again (for example) and get a new link. It is for you to decide how big a deal this is.
If I host this service and I do have a failover, I assume the failover will not be able to read a URL that is not originally from it. It should be able to read any of the encs and process them.
I assume you would have to add some persistence, so that a server can decrypt URLs that were not generated by it...
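A hedged sketch of what such persistence could look like (again generic Node rather than the library's code): keep the master key outside the deployment, so every redeploy and every container loads the same one.

const crypto = require('crypto');
const fs = require('fs');

// Load the master key from a path outside the war/image; generate and persist
// it only on first run, so later deployments and other instances reuse it.
function loadOrCreateMasterKey(keyPath) {
  if (fs.existsSync(keyPath)) {
    return fs.readFileSync(keyPath);
  }
  const key = crypto.randomBytes(32);
  fs.writeFileSync(keyPath, key, { mode: 0o600 });
  return key;
}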
Hope that helps.

How to restrict Chrome Apps to only work on specific computers?

I'm developing a POS Client using Chrome (packaged) Apps. It will run locally on the installed computers and interact with the server via web service. This app should only run on specific computers at the stores.
I know I can go to each store and install the .crx file in which case I don't have to publish the app to Chrome Web Store. However, I want it to be published to Chrome Web Store so that I can take advantage of its auto-updating feature.
What should I do to make sure that the app can only run on the stores' computers? (I can go to the stores and set up anything needed at the first installation.)
Options I have thought of:
Create some secret key and enter it into the app the first time it runs.
Build a small tool (a WinForms application) that generates time-based tokens and install it on the computers. The staff would need to enter a token each time they open the app.
Any better idea how to accomplish this?
You said the app needs to talk to a web service to work. That's the key to a simple approach; there's a code sketch after the steps below. (Assume you don't care whether the staff acquires a nonfunctional copy of the client app.)
1. At startup, the app checks for the existence of a validation of some kind stored in chrome.storage.local. If it exists, startup continues.
2. If the validation is missing, the app checks for the existence of a GUID stored in chrome.storage.local.
3. If the GUID is missing, generate and store one using something like window.crypto.getRandomValues().
4. Ask the server for a validation by sending the GUID and getting a response.
5. If a validation comes back, save it in chrome.storage.local and go back to the start of this sequence.
6. Otherwise, tell the user to get lost.
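A minimal sketch of that flow (the endpoint URL, the response shape, and the startApp/denyAccess functions are assumptions for illustration):

const VALIDATION_URL = 'https://example.com/api/validate'; // hypothetical endpoint

chrome.storage.local.get(['validation', 'guid'], (items) => {
  if (items.validation) {
    startApp(); // hypothetical: normal startup continues
    return;
  }
  // Reuse the stored GUID, or mint one from 16 random bytes.
  const guid = items.guid || Array.from(
    crypto.getRandomValues(new Uint8Array(16)),
    (b) => b.toString(16).padStart(2, '0')
  ).join('');
  fetch(VALIDATION_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ guid }),
  })
    .then((res) => (res.ok ? res.json() : Promise.reject(res.status)))
    .then(({ validation }) => chrome.storage.local.set({ guid, validation }, startApp))
    .catch(() => denyAccess()); // hypothetical: tell the user to get lost
});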
A full-strength version of this approach would have some additional features:
Use an HMAC(GUID, secret) for the validation (a server-side sketch follows this list). I'm assuming the staff aren't tech superstars, so something simple like a boolean would probably suffice otherwise.
Optionally add a per-launch step that sends up the GUID and validation and confirms it's still valid each time.
When the validation is requested, you might prompt for the secret key you mentioned in your question. In normal cases this would be needed only at provisioning time.
In case you haven't figured it out yet, the server is now acting as a simple licensing server, so it's up to you how to decide whether a validation request succeeds. Maybe it allows only N validations to exist at once, or you hardcode future validations to fail once provisioning is done. Maybe it limits validation requests to certain IP addresses. You get to choose.
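For the HMAC variant, the server side could look roughly like this (Node for consistency with the rest of this page; where the secret lives is an assumption):

const crypto = require('crypto');
const SECRET = process.env.LICENSE_SECRET; // server-side secret, never shipped to clients

function makeValidation(guid) {
  return crypto.createHmac('sha256', SECRET).update(guid).digest('hex');
}

function isValidationGood(guid, validation) {
  // For production use, prefer crypto.timingSafeEqual on equal-length buffers.
  return makeValidation(guid) === validation;
}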
That's the gist. It's a simple DRM system that is easier to manage than the enter-secret-at-installation method, but that won't withstand an attack of more than 30 minutes (since a smart attacker will just inject another machine's GUID and HMAC validation into the duplicate machine's chrome.storage.local).

What's the local IP address of the user accessing my website, in Node.js + Express?

I need to know the IP addresses of the users who visit my website. I've already written the following code, but it returns only the user's WAN IP. That's not enough for me, since I need a full trace of users to prevent them from clicking twice on the same button (cookies are not an option because most users browse in anonymous mode).
app.enable('trust proxy'); // enable trust proxy
I'm using this code to get req.ips (which in theory has what I want), or just req.ip if the former is empty; the problem is that req.ips is always empty:
req.ips.length ? req.ips : [req.ip]
Any ideas of what can I do now?
You can read the header that your reverse proxy sets, e.g.:
this.body = this.req.headers['x-real-ip'];
(That snippet is Koa-style; in Express it would be req.headers['x-real-ip']. The header is only present if a proxy such as nginx is configured to set it.)
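Pulling that together in Express terms, a hedged sketch (it assumes a reverse proxy in front that actually sets X-Forwarded-For; with no proxy, req.ips stays empty, which matches the symptom in the question):

const express = require('express');
const app = express();

app.enable('trust proxy'); // make Express honour X-Forwarded-For

app.get('/', (req, res) => {
  const ips = req.ips.length ? req.ips : [req.ip];
  res.send('Addresses seen: ' + ips.join(', '));
});

app.listen(3000);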
Sounds like you might have (partially) answered your own question, but I wanted to throw some ideas out there for you.
First, you're never going to get the locally-assigned IP address of the computer: that's just not something that the browser is ever going to report. You also probably won't be able to get meaningful identifying "low-level" network protocol information as there are many different approaches to NAT.
The traditional way to solve this, as you mentioned is cookies, but if your users are mostly using anonymous mode, that's not an option, as you say.
Have you considered HTML5 Local Storage? That is accessible in anonymous mode, and you can use it like a session, storing a unique session ID on each client. This means your clients need a relatively modern browser, but hopefully that won't be an issue.
Here's some information about HTML5 local storage:
http://diveintohtml5.info/storage.html
This isn't a slam-dunk, though: the behavior of local storage in anonymous mode is currently debated. There's a good discussion of it here:
http://blog.whatwg.org/tag/localstorage
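A minimal sketch of the localStorage-as-session idea (the sessionId key name is arbitrary, and the anonymous-mode caveat above still applies):

// Reuse the stored ID if present; otherwise generate and persist one.
function getSessionId() {
  let id = localStorage.getItem('sessionId');
  if (!id) {
    id = Array.from(
      crypto.getRandomValues(new Uint8Array(16)),
      (b) => b.toString(16).padStart(2, '0')
    ).join('');
    localStorage.setItem('sessionId', id);
  }
  return id;
}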
Another option would be to use a querystring... not a great solution, maybe, and easily foiled by the user (if you're worried about that). When a user first lands on the page, you can redirect them to the same page with a querystring containing a unique ID. This approach is burdensome in that you must make sure the querystring is added to every internal link, but it does get around all issues with anonymous browsing. A sketch follows.
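A hedged Express sketch of the querystring approach, assuming an app object as in the question (the sid parameter name is my choice; a user can trivially strip or alter it):

const crypto = require('crypto');

// Redirect any request lacking a session ID to the same URL with one appended.
app.use((req, res, next) => {
  if (!req.query.sid) {
    const sid = crypto.randomBytes(8).toString('hex');
    const sep = req.originalUrl.includes('?') ? '&' : '?';
    return res.redirect(req.originalUrl + sep + 'sid=' + sid);
  }
  next();
});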

OpenAM 10 iPlanetDirectoryPro cookie change

Several documents on the ForgeRock site mention changing the iPlanetDirectoryPro cookie name in OpenAM 10, but never mention which file(s) to change it in. I've tried several, including AgentService.xml and AMAuth.xml, to no avail. Has anyone done this successfully?
You don't have to change it in files; the files you mentioned are OpenAM service descriptions, which are loaded into the configuration store when OpenAM is configured.
Later on you have to change the service attributes using either the console or ssoadm.
You can change the name of the SSO session tracking cookie by changing the value in 'server defaults' under 'servers and sites'.
If you have agents running in normal SSO mode, be sure to adjust the value there as well.
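For reference, changing that server default via ssoadm would look roughly like this; treat the exact sub-command and flags as an assumption to verify against your OpenAM version (com.iplanet.am.cookie.name is the usual property for the session cookie name):

ssoadm update-server-cfg -s default -u amadmin -f /path/to/pwdfile -a com.iplanet.am.cookie.name=myNewCookieName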
