This may be a silly question, but is client-side Application Insights safe from spoofing? Microsoft asks you to add a bit of JavaScript to each HTML page you want recorded, and part of this contains a hard-coded instrumentation key (not a real key below!):
instrumentationKey: "3D486E8C-BDEF-43AB-B27A-9D3F9D42EC14"
There doesn't seem to be any other relationship between the URL and the key, or any mechanism to prevent spoofing of this key client-side (i.e. randomly generating the key with different numbers and submitting the page).
This wouldn't cause any damage, but it would be annoying to the receiver of the incorrect monitoring data, which may well be all someone wants to do "because they can".
Have I missed something fundamental as to why this is not possible?
It is absolutely correct that anyone who knows the instrumentation key can log misleading or garbage data to anyone's AI account. The same is true for most other web analytics systems: the request to log information is sent unauthenticated, and anyone with sufficient skill can emulate valid user data. The fact that AI has the instrumentation key embedded in the page does not make this easier, because anyone using a web traffic monitoring tool like Fiddler could still intercept and emulate the requests even if the instrumentation key were not embedded in the page.
If you suspect that a malicious user will purposefully log misleading data using your AI key, you should use caution and validate whether the data makes sense before making business decisions from it: for example, how many users the data was obtained from and over what period of time, and whether your client-side page-view data matches your server-side request data.
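As a small illustration of that kind of sanity check, the sketch below flags days where client-reported page views drift too far from server-side request counts. The data shape and the 25% tolerance are assumptions for the example, not anything Application Insights provides.

```javascript
// Flag days where client-side telemetry disagrees badly with server-side logs.
function flagSuspiciousDays(clientPageViewsByDay, serverRequestsByDay, tolerance = 0.25) {
  return Object.keys(clientPageViewsByDay).filter(day => {
    const client = clientPageViewsByDay[day];
    const server = serverRequestsByDay[day] || 0;
    if (server === 0) return client > 0;                     // views reported with no requests at all
    return Math.abs(client - server) / server > tolerance;
  });
}

// Example: far more reported views than the server actually saw.
flagSuspiciousDays({ '2024-01-01': 5000 }, { '2024-01-01': 1200 });   // -> ['2024-01-01']
```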
While not exactly a duplicate, I believe the answer is pretty much the same as this one:
How does Google Analytics prevent traffic spoofing
AI doesn't know how or where you're using your key, so how would they know which traffic is legitimate and which is not?
+1 to Alex's answer. FYI, this is the official answer from Azure Monitor: My Instrumentation Key is visible in my web page source
TL;DR:
Yes, data can be skewed but not stolen
It is a common practice
To mitigate the impact, you could set up two separate app insights resources: one for client, the other for server
To overcome the issue, you can set up a custom API that receives telemetry from the client and forwards it to App Insights (a minimal sketch follows).
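For that last point, here is a minimal sketch of such a forwarding API in Node/Express (assumes Node 18+ for the global fetch). The /telemetry route, the filtering rules, and the envelope field names are illustrative assumptions, and the ingestion URL should be checked against the current Azure documentation.

```javascript
// Sketch of a telemetry-forwarding endpoint: the page posts telemetry here instead of
// directly to Application Insights, so the real instrumentation key stays server-side.
const express = require('express');

const app = express();
app.use(express.json());

const INSTRUMENTATION_KEY = process.env.APPINSIGHTS_KEY;                 // never sent to the browser
const INGESTION_URL = 'https://dc.services.visualstudio.com/v2/track';   // check the current docs

app.post('/telemetry', async (req, res) => {
  const items = Array.isArray(req.body) ? req.body : [req.body];

  // Basic sanity filtering before forwarding: only pass through item types you expect,
  // and stamp in the server-held key. Field names here are illustrative, not the full schema.
  const allowed = items
    .filter(item => ['PageviewData', 'EventData'].includes(item?.data?.baseType))
    .map(item => ({ ...item, iKey: INSTRUMENTATION_KEY }));

  if (allowed.length === 0) return res.status(400).end();

  const upstream = await fetch(INGESTION_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(allowed),
  });

  res.status(upstream.ok ? 204 : 502).end();
});

app.listen(3000);
```

The proxy does not make the data trustworthy by itself, but it keeps the key out of page source and gives you one place to filter and rate-limit what reaches your AI resource.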
Related
We are planning to go for a security testing certification. For that reason we are using the Paros tool to test our system.
The system's front end is written in GWT, and database connectivity happens through Hibernate.
When we use this tool to test our application, the following behaviour occurs, which needs to be restricted.
The tool is able to see the data which is passed to the server. This is fine, but when we make any changes to the data through the tool, those changes get applied to the system at the database end. This is a big security issue.
Can someone guide me on this?
If you're still looking for a solution to this problem, you could use request signing. The reason I didn't mention it earlier is that the only time I had seen request signing, there were certificates involved, and it was mostly using the Web Services Security standard. The other time I recommended implementing request signing was for a mobile application; it's relatively easier to do there, since you can use certificates that are on the device to perform the signing, and the server can verify that signature (essentially, a public-key encryption mechanism).
As you mention in the comments, there are multiple aspects to this. One is preventing XSRF, which essentially means including a nonce to ensure that an attacker cannot replay requests, or craft requests that might harm an authenticated user. This nonce has to come from the server, since anything you create using JavaScript the attacker can create as well. The nonce makes your request time-specific and ensures it cannot be replayed at a later point in time.
However, a nonce isn't going to stop attacks where a user is on a hostile network and an attacker is performing a MitM attack on all traffic. The attacker can still modify a request, and since the server has never seen that nonce before, it will accept the request as valid. To prevent this, you need two countermeasures in place: one, all traffic should go over SSL, and two, all requests must be signed to prevent tampering. The signature part is particularly hard, especially if you have to ensure that an attacker cannot perform the same signing. The examples I have seen involve certificate-level authentication for the web app, and using those certificates to perform the signing, which might be too stringent a requirement for the application you seem to be developing. Other approaches involve using something the user has or knows - maybe a token, password, secret answer, etc. - that cannot be replicated by an attacker, and using that information to sign requests.
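As a rough sketch of the nonce-plus-signature idea, the snippet below uses Node's built-in crypto module. The shared secret, the canonical-string layout, and the in-memory nonce set are assumptions for illustration, not a drop-in design.

```javascript
// Sign a request over everything an attacker could tamper with, tied to a server-issued nonce.
const crypto = require('crypto');

function signRequest(method, path, body, nonce, secret) {
  // Method, path, nonce, and body all go into the signed string.
  const canonical = [method.toUpperCase(), path, nonce, body].join('\n');
  return crypto.createHmac('sha256', secret).update(canonical).digest('hex');
}

// Server side: reject replays, then recompute the signature and compare in constant time.
function verifyRequest(method, path, body, nonce, signature, secret, seenNonces) {
  if (seenNonces.has(nonce)) return false;                  // replayed nonce
  seenNonces.add(nonce);
  const expected = signRequest(method, path, body, nonce, secret);
  return expected.length === signature.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}
```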
Here's an example of how you can do this via PHP. I don't know whether this mechanism can be adapted for your purposes, though. OAuth might be another possible method, but since I've never seen an application do it that way, I am not very sure.
Sorry I don't have a specific methodology or examples of code for you to look at, but most implementations I've seen are only from a design standpoint, versus an actual code standpoint.
I am architecting a project which uses jQuery to communicate with a single web service hosted inside SharePoint (this point is possibly moot, but I include it for background and to emphasize that session state is not good in a multiple-front-end environment).
The web services are ASP.NET ASMX services which return a JSON model, which is then mapped to the UI using Knockout. To save the form, the converse happens: the JSON model is sent back to the service and persisted.
The client has unveiled a requirement for confidentiality of the data being sent to and from the server:
There is data which should only be sent from the client to the server.
There is data which should only appear in specific views (solvable using ViewModels so not too concerned about this one)
The data should be immune from classic playback attacks.
Short of building proprietary security, what is the best way to secure the web service, and are there any other libraries I should be looking at to assist me, either in JavaScript or .NET?
I'll post this as an answer...for the possibility of points :)...but I don't know that you could easily say there is a "right way" to do this.
The communications sent to the service should of course be over HTTPS. That limits the man-in-the-middle attack. I always check that the sending client's IP matches the IP address in the host header. That can be spoofed, but it makes things a bit more annoying :). Also, I timestamp all of my JSON on the client before sending, then make sure it's within X seconds on the server. That helps to prevent playback attacks.
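A minimal sketch of that timestamp check is below. The 30-second window and the "timestamp" field name are assumptions; pick values that fit your application.

```javascript
// Reject payloads whose client-side timestamp is outside the allowed window.
const MAX_SKEW_MS = 30 * 1000;

function isFresh(payload) {
  const sentAt = Date.parse(payload.timestamp);   // client stamps the JSON before posting
  if (Number.isNaN(sentAt)) return false;
  return Math.abs(Date.now() - sentAt) <= MAX_SKEW_MS;
}
```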
Obviously JavaScript is insecure, and you always need to keep that in mind.
Hopefully that gives you a tiny bit of insight. I'm writing a blog post about this pattern I've been using. It could be helpful for you, but it's not done :(. I'll post a link sometime tonight or tomorrow.
I've done a little googling but have been a bit overwhelmed by the amount of information. Until now, I've been considering requiring a valid MD5 hash with every API call, but I realized that it wouldn't be a difficult task to hijack such a system. Would you guys be kind enough to provide me with a few links that might help me in my search? Thanks.
First, consider OAuth. It's somewhat of a standard for web-based APIs nowadays.
Second, some other potential resources -
A couple of decent blog entries:
http://blog.sonoasystems.com/detail/dont_roll_your_own_api_security_recommendations1/
http://blog.sonoasystems.com/detail/more_api_security_choices_oauth_ssl_saml_and_rolling_your_own/
A previous question:
Good approach for a web API token scheme?
I'd like to add some clarifying information to this question. The "use OAuth" answer is correct, but also loaded (given the spec is quite long and people who aren't familiar with it typically want to kill themselves after seeing it).
I wrote up a story-style tutorial on how to go from no security to HMAC-based security when designing a secure REST API here:
http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
This ends up being basically what is known as "2-legged OAuth". Because OAuth was originally intended for verifying client applications, the standard flow has three parts, involving the authenticating service, the user staring at the screen, and the service that wants to use the client's credentials.
2-legged OAuth (and what I outline in depth in that article) is intended for service APIs to authenticate with each other. For example, this is the approach Amazon Web Services uses for all their API calls.
The gist is that with any request over HTTP you have to consider the attack vector where some malicious man-in-the-middle is recording and replaying or changing your requests.
For example, you issue a POST to /user/create with name 'bob', well the man-in-the-middle can issue a POST to /user/delete with name 'bob' just to be nasty.
The client and server need some way to trust each other and the only way that can happen is via public/private keys.
You can't just pass the public/private keys back and forth, nor can you simply provide a unique token signed with the private key (which is typically what most people do and think makes them safe). While that will identify the original request as coming from the real client, it still leaves the arguments to the command open to change.
For example, if I send:
/chargeCC?user=bob&amt=100.00&key=kjDSLKjdasdmiUDSkjh
where the key is my public key signed with my private key, a man-in-the-middle can intercept this call and re-submit it to the server with an "amt" value of "10000.00" instead.
The key is that you have to include ALL the parameters you send in the hash calculation, so when the server gets it, it re-vets all the values by recalculating the same hash on its side.
REMINDER: Only the client and server know the private key.
This style of verification is called an "HMAC"; it is a checksum verifying the contents of the request.
Because hash generation is SO touchy and must be done EXACTLY the same on both the client and server in order to get the same hash, there are super-strict rules on exactly how all the values should be combined.
For example, these two lines produce VERY different hashes when you try to sign them with SHA-1:
/chargeCC&user=bob&amt=100
/chargeCC&amt=100&user=bob
A lot of the OAuth spec is spent describing that exact method of combination in excruciating detail, using terminology like "natural byte ordering" and other non-human-readable garbage.
It is important though, because if you get that combination of values wrong, the client and server cannot correctly vet each other's requests.
You also can't take shortcuts and just concatenate everything into a huge String; Amazon tried this with AWS Signature Version 1 and it turned out to be flawed.
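To make the "include every parameter, in a fixed order" rule concrete, here is an illustrative sketch using Node's built-in crypto module. The sort-and-join canonicalization is a simplification, not the exact OAuth or AWS scheme (those also URL-encode names and values, among other rules).

```javascript
// Build one canonical string from the path and ALL parameters, then HMAC it
// with the secret that only the client and server know.
const crypto = require('crypto');

function canonicalize(path, params) {
  // Sort parameter names so client and server always build the identical string.
  const query = Object.keys(params)
    .sort()
    .map(name => `${name}=${params[name]}`)
    .join('&');
  return `${path}?${query}`;
}

function sign(path, params, sharedSecret) {
  return crypto
    .createHmac('sha256', sharedSecret)
    .update(canonicalize(path, params))
    .digest('hex');
}

// Both parameter orderings from the example above canonicalize to the same string,
// and any man-in-the-middle change to "amt" changes the HMAC the server recomputes.
const sig = sign('/chargeCC', { user: 'bob', amt: '100.00' }, 'secret-known-only-to-client-and-server');
```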
I hope all of that helps, feel free to ask questions if you are stuck.
I'm building a system that needs to collect some user-sensitive data via a secured web connection and store it securely on the server for later automated decryption and reuse. The system should also allow the user to view some part of the secured data (e.g., *****ze) and/or change it completely via the web. The system should provide a reasonable level of security.
I was thinking of the following infrastructure:
App (Web) Server 1

1. Web server with proper TLS support for secured web connections.
2. Use a public-key algorithm (e.g. RSA) to encrypt entered user-sensitive data and send it to App Server 2 via a one-way outbound secured channel (e.g. SSH-2), without storing it anywhere on either App Server 1 or DB Server 1 (a sketch of this step follows after this outline).
3. Use a user-password-dependent symmetric-key algorithm to encrypt some part of the entered data (e.g. the last few letters/digits) and store it on DB Server 1 for later retrieval by App Server 1 during the user's web session.
4. Re-use step 2 for data modification by the user via the web.

DB Server 1

1. Store unsecured non-sensitive user data.
2. Store the part of the sensitive user data encrypted on App Server 1 (see step 3 above).

App Server 2

1. Do NOT EVER send anything TO App Server 1 or DB Server 1.
2. Receive encrypted user-sensitive data from App Server 1 and store it in DB Server 2.
3. Retrieve encrypted user-sensitive data from DB Server 2 according to local schedules and decrypt it using the private key (see App Server 1, step 2) stored locally on App Server 2 with proper key management.

DB Server 2

1. Store encrypted user-sensitive data (see App Server 2, step 2).
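As a concrete illustration of App Server 1, step 2 only, here is a minimal hybrid-encryption sketch in Node.js. It assumes App Server 2's RSA public key is available as a PEM string; the envelope layout is invented for the example, and it says nothing about the transport (SSH-2) or key management.

```javascript
// Hybrid encryption sketch: wrap a one-off AES key with App Server 2's RSA public key
// (RSA-OAEP), encrypt the sensitive data with AES-256-GCM, and ship the envelope onward.
const crypto = require('crypto');

function encryptForAppServer2(plaintext, appServer2PublicKeyPem) {
  const dataKey = crypto.randomBytes(32);   // per-record symmetric key
  const iv = crypto.randomBytes(12);

  const cipher = crypto.createCipheriv('aes-256-gcm', dataKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  const authTag = cipher.getAuthTag();

  // Only App Server 2, which holds the matching private key, can unwrap dataKey.
  const wrappedKey = crypto.publicEncrypt(
    { key: appServer2PublicKeyPem, padding: crypto.constants.RSA_PKCS1_OAEP_PADDING },
    dataKey
  );

  return { wrappedKey, iv, authTag, ciphertext };   // envelope sent on to App Server 2
}
```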
If either App (Web) Server 1 or DB Server 1 or both are compromised, the attacker will not be able to get any user-sensitive data (encrypted or not). All the attacker will have is access to the public key and the encryption algorithms, which are well known anyway. The attacker will, however, be able to modify the web server to capture currently logged-in users' passwords in plaintext and decrypt the part of the user-sensitive data stored in DB Server 1 (see App Server 1, step 3), which I don't consider a big deal. The attacker will also be able (via code modification) to intercept user-sensitive data entered by users via the web during the attack. The latter I consider a higher risk, but provided that it is hard (is it?) for an attacker to modify code without someone noticing, I guess I shouldn't worry much about it.
If App Server 2 and the private key are compromised then the attacker will have access to everything, but neither App Server 2 nor DB Server 2 is web-facing, so this shouldn't be a problem.
How secure is this architecture? Is my understanding of how encryption algorithms and secured protocols work correct?
Thank you!
I don't think I can give a proper response because I'm not sure the goal of your system is clear. While I appreciate you wanting feedback on a design, it's a bit hard without a purpose.
I would suggest to you this though:
Strongly document and analyse your threat model first
You need to come up with a fixed, hard-lined list of all possible attack scenarios. Local attackers, etc. - who are you trying to protect against? You also say things like "with proper key management", yet this is one of the hardest things to do. So don't just assume you can get this right; fully plan out how you will do it, with specific links to which attackers it will prevent.
The reason you need to do a threat model is that you will need to determine on which angles you will be vulnerable, because there will be some.
I will also suggest that while the theory is good, in crypto the implementation is also very critical. Do not just assume that you will do things correctly; you really need to take care as to where random numbers come from, and other such things.
I know this is a bit vague, but I do think that at least coming up with a formal and strong threat model will be very helpful for you.
So far so good. You are well on your way to a very secure architecture. There are other concerns, such as firewalls, password policies, logging, monitoring and alerting to consider, but everything you described so far is very solid. If the data is sensitive enough, consider a third party audit of your security.
I would not recommend using any form of public key to communicate from your web server to your app server. If you control both systems, just use a regular shared-secret form of encryption. You know the identity of your app server, so keeping the key secure is not an issue. If you ever need to change or update the secret key, do so manually to prevent it from leaking across a connection.
What I would be most careful about is the direction of data transfer from your server in the DMZ, which should only be your web server, to the boxes residing internally on your network. It is becoming increasingly common for legitimate domains to be compromised to distribute malware to visiting users. That is bad, but if the malware were to turn inward to your network instead of only outward to your users, then your business would be completely hosed.
I also did not see anything about preventing SQL injection, or about system hardening/patching to prevent malware distribution. This should be your first and most important consideration. If security is important to you, then you want your architecture to be flexible enough for minor customizations of inter-server communication and frequent patching. Most websites, even major legitimate businesses, never fix their security holes even after they are compromised. You must be continually fixing security holes and changing things to prevent holes from arising if you wish to avoid being compromised in the first place.
To prevent becoming a malware distributor I would suggest making hard and fast rules about how any media containing client-side scripting is served. Client-side scripting can be found in JavaScript, ActiveX, Flash, Acrobat, Silverlight, and any other code or plugin that executes on the client system. Policies for serving that content must exist so that anomalous code fragments can be immediately identified. My recommendation is to NEVER embed client-side code directly into a page, but always reference an external file. I would also suggest consolidating like media to give you better asset control and save you bandwidth, such as serving one large JavaScript file instead of 8 small ones. I would also recommend forcing all such media onto an external content distribution system that references your domain in its directory structure. That way media is not served from your servers directly, and if it is served from you directly you can quickly identify it as potentially malicious and necessitating a security review.
I want to create a portal website for log-in, news and user management. And another web site for a web app that the portal redirects to after login.
One of my goals is to be able to host the portal and the web app on different servers. The portal would transmit the user's id to the web app once the user had successfully logged in and been redirected to it. But I don't want people to be able to just bypass the login, or access other users' accounts, by transmitting user ids straight to the web app.
My first thought is to transmit the user id encrypted as a POST variable or query-string value, using some kind of public/private key scenario and adding a DateTime stamp to the key so it varies every time.
But I haven't done this kind of thing before, so I'm wondering if there aren't better ways to do this.
(I could potentially communicate via database, by having the portal store the user id with a key in a database and passing that key to the web app which uses it to get the user id from that database. But that seems crazy.)
Can anyone give a way to do this or advice? Or is this a bad idea all-together?
Thanks for your time.
Basically, you are asking for a single-sign-on solution. What you describe sounds a lot like SAML, although SAML is a bit more advanced ;-)
It depends on how secure you want this entire thing to be. Generating an encrypted token with an embedded timestamp still leaves you open to spoofing: if somebody steals the token (e.g. through network sniffing) he will be able to submit his own request with the stolen token. Depending on the time-to-live you give your token, this window can be limited, but a determined hacker will still manage it. Besides, you cannot make the time-to-live too small or you will start rejecting valid requests.
Another approach is to generate "use once" tokens. This is "bullet proof" in terms of spoofing, but it requires coordination among all the servers within the server farm servicing your app, so that if one of them has processed a token the other ones will reject it (a rough sketch follows below).
To make it really secure for failover scenarios, etc., it would require some additional steps, so it all boils down to how secure you need it to be and how much you want to invest in building it up.
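Here is a rough sketch of the "use once" idea. It keeps tokens in memory for brevity; as noted above, a real server farm would need a shared store (database or cache) so every server sees the same set of consumed tokens, and the names here are assumptions.

```javascript
// In-memory one-time login tokens: issued by the portal, redeemed exactly once by the app.
const crypto = require('crypto');

const issued = new Map();   // token -> { userId, expiresAt }

function issueToken(userId, ttlMs = 60 * 1000) {
  const token = crypto.randomBytes(32).toString('hex');
  issued.set(token, { userId, expiresAt: Date.now() + ttlMs });
  return token;
}

function redeemToken(token) {
  const entry = issued.get(token);
  issued.delete(token);                                    // consume it: a replayed token fails here
  if (!entry || entry.expiresAt < Date.now()) return null;
  return entry.userId;                                     // treat this as the logged-in user
}
```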
I suggest looking at SAML
PGP would work but it might get slow on a high-traffic site
One thing I've done in the past is use a shared-secret method: some token that only myself and the other website operator know, concatenated with something identifying the user (like their user name), then hashed with a checksum algorithm such as SHA-256 (you can use MD5 or SHA-1, which are usually more available, but they are much easier to break).
The other end should do the same thing as above: take the passed identifying information and checksum it, then compare that to the passed checksum. If they match, the login is valid.
For added security you could also concatenate the date or some other rotating key. It helps to run SSL on both sides as well.
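A small sketch of this shared-secret approach in Node.js is below. It uses an HMAC with SHA-256 rather than a bare hash of secret-plus-username, which is the safer variant of the same idea; the query-string layout and the daily rotating date component are assumptions.

```javascript
// Portal and web app both hold SHARED_SECRET; the portal passes ?user=...&token=...
const crypto = require('crypto');

const SHARED_SECRET = 'known-only-to-both-site-operators';

function makeLoginToken(username) {
  const day = new Date().toISOString().slice(0, 10);   // rotating date component
  return crypto
    .createHmac('sha256', SHARED_SECRET)
    .update(`${username}|${day}`)
    .digest('hex');
}

// Web-app side: recompute the token from the passed username and compare.
function isValidLogin(username, token) {
  const expected = makeLoginToken(username);
  return expected.length === token.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(token));
}
```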
In general, the answer resides somewhere in SHA-256 / MD5 / SHA-1 plus a shared secret that a human actually has to think up. If there is money involved somewhere, we may assume there are no limits to what some people will do - I ran with [ a person ] in high school for a few months to observe what those ilks will do in practice. After a few months, I learned not to run with that kind. Tediously avoiding work, then suddenly at 4 AM on a Saturday morning the level of effort and analytical functioning could only be described as "Expertise" (note the capitalization). There has to be a solution, else sites like Google and this one would not stand the chance of a dandelion in a lightning storm.
There is a study in the mathematical works of cryptography whereby an institution (with reputable goals) can issue information - digital cash - that can exist on the open wire but does not reveal any information. Who would break it? My experience with [ person ]
shows that it is a study in socialization; it depends on who you want to run with. What's the defense against sniffers if the code is already available even more easily just by using a browser?
<input type="hidden" value="myreallysecretid">
vis a vis
<input type="hidden" value="weoi938389wiwdfu0789we394">
So which one is valuable against attack? Neither. If someone wants to snag some snake oil from you, maybe you get the 2:59 am phone call that begins: "I'm an investor, we sunk thousands into your website. I just got a call from our security pro ....." All you can do to prepare for that moment is use established, known tools like SHA - of which the 256 variety is the acknowledged "next thing" - and have trace controls in place so that the security pro can put it on insurance and bonding.
Let alone trying to find one who knows how those tools work: their first line of defense is not talking to you ... then they have their own literature - they will want you to use their tools.
Then you don't get to code anything.