I am using the JBoss application server.
I have implemented the whole website over SSL (HTTPS). The site works fine in Internet Explorer, but in Mozilla/Konqueror it displays the information below, though only on one particular page.
Security Warning:
Although this page is encrypted, the information you have entered is to be sent over an unencrypted connection and could easily be read by a third party.
Are you sure you want to continue?
[Continue] [Cancel]
Is this a JBoss feature?
If the whole site is running on HTTPS, why does this warning display on only one particular page?
What should I do to get rid of this problem?
Please do help me! My mail ID is [redacted]
Thanks and regards,
AKhtar Bhat
Check for images or other resources that are being requested from non-SSL locations. This is usually the problem.
Posting your E-mail address here is probably not the best idea; it is likely to be harvested by spammers (although Gmail does have a fairly effective spam filter). It is also extremely unlikely anyone is going to answer you by E-mail, since that would defeat the purpose of Stack Overflow, which is probably why your question was voted down.
To resolve your problem, you must find the page that is presenting the dialog or message you are seeing, then view the source of the webpage. Ensure the action attribute of any <form> tag is either relative to the current server (i.e. action="/some/path/...") or absolute but directed to https (i.e. action="https://some_server/some/path/...").
If you are using any AJAX calls, you must also ensure they are using https.
It seems unlikely the message is caused by resources being sent to you insecurely. It is much more likely the result of a <form> tag with an incorrect action attribute or an insecure AJAX call.
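If nothing jumps out when you view the source, one way to hunt the culprit down is a small diagnostic servlet filter that buffers each HTML response and logs any attribute pointing at plain http://. This is only a sketch under assumptions (the class name is made up, it only catches pages written via getWriter(), and the regex is deliberately crude); it is a debugging aid, not a JBoss facility:

import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Diagnostic filter: logs every action/src/href attribute that points at
// plain http://, i.e. exactly the references that trigger the warning.
public class MixedContentAuditFilter implements Filter {

    private static final Pattern INSECURE = Pattern.compile(
            "(?:action|src|href)\\s*=\\s*[\"']http://[^\"']*[\"']",
            Pattern.CASE_INSENSITIVE);

    public void init(FilterConfig config) { }

    public void destroy() { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        final CharArrayWriter buffer = new CharArrayWriter();
        HttpServletResponseWrapper wrapper =
                new HttpServletResponseWrapper((HttpServletResponse) res) {
            @Override
            public PrintWriter getWriter() {
                return new PrintWriter(buffer);  // capture the page instead of streaming it
            }
        };
        chain.doFilter(req, wrapper);

        String html = buffer.toString();
        Matcher m = INSECURE.matcher(html);
        while (m.find()) {
            // Each hit is a candidate cause of the mixed-content warning
            System.err.println("Insecure reference on "
                    + ((HttpServletRequest) req).getRequestURI() + ": " + m.group());
        }
        res.getWriter().write(html);  // then send the page on unchanged
    }
}

Map it against /* in web.xml while debugging and watch the server log; once the offending URL is found, remove the filter again.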
Check the FORM tag for its ACTION attribute.
The action should be an HTTPS URL (which it will be if it is relative, but may not be if it is absolute).
I think you'll find some information on this page that should help you:
http://blog.httpwatch.com/2009/04/23/fixing-the-ie-8-warning-do-you-want-to-view-only-the-webpage-content-that-was-delivered-securely/
I am using the TWebBrowser in Delphi (2009) to log into cPanel on my ISP and add a new remote host IP address for a MySQL database. The user name and password are filled in by code, as is clicking the submit button, using code gleaned from several places here.
Navigating directly to the hosts page causes the cPanel login page to be shown first. My program detects this and logs me in.
This uses a line like
WebBrowser1.Navigate('https://thedomain.sgcpanel.com:2083/cpsess1819495779/frontend/Crystal/sql/managehost.html');
which reaches the hosts page OK but I notice that the security token (cpsess1819495779) is changed to something else each time, presumably being supplied by the login page.
However, if I try to log in first as a separate operation and then navigate to the hosts page using
WebBrowser1.Navigate('https://thedomain.sgcpanel.com:2083');
followed by
WebBrowser1.Navigate('https://thedomain.sgcpanel.com:2083/cpsess1819495779/frontend/Crystal/sql/managehost.html');
I get a server message saying the URL for the hosts page has an invalid security token - presumably the cpsess1819495779 bit.
Question
How can I use TWebBrowser to get hold of the security token generated by the login page, in order to build the correct URL for the hosts page so that I pass the correct security token each time?
It's probably something to do with cookies etc., but I don't know how to deal with those (yet).
BTW, as the TWebBrowser is not visible, I did spend quite a few days trying to do the same thing using Indy's TIdHTTP, but I have given up on that as I am getting too many errors I can't sort out.
I may as well answer this myself to close the question, and maybe avoid any more downvotes for posting a question after extensive research failed to produce the answer, one that was framed without much of my code for brevity.
The API documentation for cPanel (the application used by many ISPs to manage MySQL, email, etc.) is here: https://documentation.cpanel.net/display/SDK/Guide+to+cPanel+API+2
Part of it says:
Security token: After you log in to your server, it automatically appends a security token to the URL for your session. Security tokens help prevent unauthorized use of a website through XSRF (Cross-Site Request Forgery). Security tokens contain the string cpsess and a 10-digit number.
Logging in manually in IE / Chrome etc. does indeed show the token, e.g. cpsess1819495779, inserted into the original URL that was navigated to. So if I navigate to
https://thedomain.sgcpanel.com:2083/cpsess0000000000/frontend/Crystal/sql/managehost.html
(to log on to cPanel), the part of the URL displayed in the browser after cpsess gets changed to something like this, where the number changes each time:
https://secureukm11.sgcpanel.com:2083/cpsess1819495779/frontend/Crystal/sql/managehost.html
However, asking TWebBrowser for that modified URL using
memo1.Lines.Add(WebBrowser1.LocationURL);
or
ShowMessage('URL: ' + Webbrowser1.OleObject.Document.Url);
simply shows the original URL with the zeros, not the real security token.
So the answer to my question seems to be that it can't be done in TWebBrowser, as the URL is only changed at the server and the security token is not transmitted back to the browser.
I am working on a financial web application.
There is a client requirement that if a user is logged in and already browsing the app, and he copies and pastes the browser URL into another window, then the user should be logged out in that other window.
I know HTTP is stateless and there is no inbuilt browser mechanism (cookies etc.) to solve this; it needs to be implemented in code. I guess people have already solved this problem. Do you know a possible solution to this issue?
Sadly, there is no solution.
The browser keeps the cookies and all of the user information for all the tabs and windows you open. It will clear the data (like cookies that ask to be removed after the session) as soon as you close ALL tabs and windows of your browser. Note that if the user uses another browser, the behaviour you want will be respected; browsers don't (yet?) share this kind of information.
It is simply not possible to solve the problem with code alone, and you'll have to find a work-around.
As a researcher, I've seen one of these solutions: de-auth the user based on HTTP_REFERER (an Apache environment variable). As soon as the referer was not the application itself (except for the login form), the user was de-authed. But be careful with it: the Referer is information sent by the browser, and no information sent by the browser should be trusted :). The same advice applies if you want to use JavaScript; you'll find someone using a JS-disabled browser to bypass your check.
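For what it's worth, here is what that Referer-based de-auth might look like as a Java servlet filter (a minimal sketch; the host name, the /login path and the class name are all assumptions). The caveat above still applies: treat it as a usability trick, not a security control.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// De-auths any request that did not arrive from within the application itself.
public class RefererGuardFilter implements Filter {

    private static final String OWN_SITE = "https://myapp.example.com/";  // assumption

    public void init(FilterConfig config) { }

    public void destroy() { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String referer = request.getHeader("Referer");
        boolean isLoginPage = request.getRequestURI().startsWith("/login");

        // A URL pasted into a fresh window arrives with no Referer at all,
        // so the session gets invalidated, which is the behaviour requested.
        if (!isLoginPage && (referer == null || !referer.startsWith(OWN_SITE))) {
            HttpSession session = request.getSession(false);
            if (session != null) {
                session.invalidate();
            }
            response.sendRedirect("/login");
            return;
        }
        chain.doFilter(req, res);
    }
}

Note it will also log out users who arrive via a bookmark, since those requests carry no Referer either; that side effect is inherent to the approach.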
That's why Application Development is not yet dead ;)
Cheers.
K.
I have a page that contains sensitive information that I would like to require reauthentication in order to load. I am using Classic authentication mode, not forms.
The first method I looked at was the PrincipalContext.ValidateCredentials method, but that would require sending login details in plain text (I think).
I have thought about using JavaScript to turn off cookies so they would have to log back in, but I haven't thought of a good way of doing this.
Has anyone done this before with SharePoint?
What I ended up with: a web part on the page with sensitive material forces an HTTP 401 and then redirects to another page. That other page has a second web part, which then redirects back to the original page after setting a session variable.
You could use something along the lines of this if you're using IE6/8, but other browsers may have issues with it (look into HTTP keep-alives).
<script type='text/javascript'>
document.execCommand("ClearAuthenticationCache");
</script>
That said, it doesn't seem like the friendliest UI option to forcibly clear someone's authentication. I suspect a better option would depend on the audience and whether they are on a trusted domain or coming from an external source. If they are on the trusted domain and don't normally log in anyway, this approach likely won't please them much.
I have a URL to a search page (e.g. http://x.y.z/search?initialQuery=test). It isn't a web service endpoint, it's just a basic URL (which goes through a Spring controller). There is no security around accessing the page; you can enter the link in a browser and it will render results.
What I want is to find a way to prevent other sites from submitting requests to this url, unless they are specifically allowed.
I built a filter which intercepts all requests to this page and performs some validation; if validation fails, the requester is redirected to another page.
The problem is what validation to perform... I tried using the Referer field to see if the request was coming from an "allowed" site, but I know the Referer field isn't always populated and can easily be faked.
Is there a way to achieve this?
We also have IHS, so if there is something that can be done in there, that would be great.
I'd suggest implementing some kind of system to allow users to log in if you really want to protect a page from being accessed.
You could try to detect the IP address of the incoming request, but I'd imagine that this can be spoofed quite easily.
Realistically, pages that are public are open to any kind of interrogation to the limits that you set. Perhaps limiting the data that the page returns is a more practical option?
This is the reason that websites like Facebook and Twitter implement OAuth: to prevent resources from being accessed by unauthorised users.
How about you only run the search if the referring page has passed along a POST variable called "token", set to a value that you give each app that's allowed to hit the search page? If you get a request for that page with a query string but no POST value for "token", then you know it's an unauthorized request and can handle it accordingly.
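Since the page already goes through a Spring controller, that token check can live right in the handler method. A minimal sketch, assuming a single shared secret handed out to the allowed apps (the class, the view names and the helper method here are made up):

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
public class SearchController {

    // Issued out-of-band to each application allowed to hit the search page
    private static final String SHARED_TOKEN = "replace-with-a-long-random-value";

    @RequestMapping(value = "/search", method = RequestMethod.POST)
    public String search(@RequestParam("initialQuery") String query,
                         @RequestParam(value = "token", required = false) String token,
                         Model model) {
        if (!SHARED_TOKEN.equals(token)) {
            // Query string present but no valid POST token: treat as unauthorized
            return "redirect:/unauthorized";
        }
        model.addAttribute("results", runSearch(query));
        return "searchResults";
    }

    private Object runSearch(String query) {
        // ... existing search logic ...
        return null;
    }
}

Bear in mind a static shared token is only a speed bump: anyone who observes one request from an allowed app can replay it. Per-request tokens signed with a timestamp would be sturdier.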
If you know the IPs of the sites which can contact your service, you can put Apache in front as a proxy and use access control to permit/deny access to specific directories/URLs.
I assume that you want to avoid having your site "scraped" by bots, but do want to allow humans to access your search page.
This is a fairly common requirement (google "anti scraping"). In ascending order of robustness (but descending order of user-friendliness):
1. Block requests based on the HTTP headers (IP address, user agent, referrer); a sketch of this appears below.
2. Implement some kind of CAPTCHA system.
3. Require users to log in before accessing the search URL.
You may be able to buy some off-the-shelf wizardry that (claims to) do it all for you, but if your data is valuable enough, those who want it will hire mechanical turks to get it...
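As a concrete starting point for option 1, a header-screening servlet filter might look like the sketch below. Every value it inspects can be faked, which is exactly why this sits at the weak-but-friendly end of the scale; the class name and the specific checks are illustrative assumptions only:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Rejects requests whose headers look automated rather than browser-like.
public class HeaderScreeningFilter implements Filter {

    public void init(FilterConfig config) { }

    public void destroy() { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String agent = request.getHeader("User-Agent");

        boolean suspicious = agent == null
                || agent.toLowerCase().contains("bot")
                || agent.toLowerCase().contains("curl");

        if (suspicious) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
            return;  // blocked before the search controller ever runs
        }
        chain.doFilter(req, res);
    }
}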
Set up certificate-based security.
You can make a self-signed certificate using OpenSSL or the Java keytool, and you will have to send a copy of the certificate to your client. If a client does not have this certificate, it will not be able to call your service.
You also need to enable certificates in your web container. I don't know about other containers, but in Apache Tomcat you can do it in the Connector tag of your server.xml.
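On the calling side, a Java client would then present that certificate when it connects. A minimal sketch, assuming the keystore/truststore files and passwords below (all placeholder names) were created with keytool beforehand and that the server's Connector has client authentication switched on:

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class MutualTlsClient {

    public static void main(String[] args) throws Exception {
        // Keystore holding the client certificate the server was given a copy of
        System.setProperty("javax.net.ssl.keyStore", "client.jks");
        System.setProperty("javax.net.ssl.keyStorePassword", "changeit");

        // Truststore holding the server's self-signed certificate,
        // so the client accepts it despite it not being CA-signed
        System.setProperty("javax.net.ssl.trustStore", "truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        HttpsURLConnection conn = (HttpsURLConnection)
                new URL("https://x.y.z/search?initialQuery=test").openConnection();
        System.out.println("HTTP " + conn.getResponseCode());
    }
}

A client without client.jks fails the TLS handshake before any application code runs, which is the point of the scheme.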
I've noticed that some email services (like gmail or my school's webmail) will redirect links (or used to) in the email body. So when I put "www.google.com" in the body of my email, and I check that email in gmail or something, the link says something like "gmail.com/redirect?www.google.com".
This was very confusing for me and the people I emailed (like my parents, who are not familiar with computers). I always clicked on the link anyway, but why is this service used? (I'm also worried that maybe my information was being sent somewhere... Do I have anything to worry about? Is something being stored before the redirect?)
Sorry if this is unwarranted paranoia. I am just curious about why some things work the way they do.
Wikipedia has a good article on URL redirection. From the article:
Logging outgoing links

The access logs of most web servers keep detailed information about where visitors came from and how they browsed the hosted site. They do not, however, log which links visitors left by. This is because the visitor's browser has no need to communicate with the original server when the visitor clicks on an outgoing link. This information can be captured in several ways. One way involves URL redirection. Instead of sending the visitor straight to the other site, links on the site can direct to a URL on the original website's domain that automatically redirects to the real target. This technique bears the downside of the delay caused by the additional request to the original website's server. As this added request will leave a trace in the server log, revealing exactly which link was followed, it can also be a privacy issue.[1] The same technique is also used by some corporate websites to implement a statement that the subsequent content is at another site, and therefore not necessarily affiliated with the corporation. In such scenarios, displaying the warning causes an additional delay.
So, yes, Google (and Facebook and Twitter, which do this too) is logging where these services are taking you. This is important for a variety of reasons: it lets them know how their service is being used, shows trends in data, allows links to be monetized, etc.
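Mechanically there is nothing exotic going on. The redirect endpoint in your "gmail.com/redirect?..." example can be sketched as a tiny servlet (hypothetical code, certainly not what Google actually runs): record the target, then issue an ordinary 302.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RedirectServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        String target = req.getParameter("url");
        if (target == null || !target.startsWith("http")) {
            res.sendError(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }
        log("outgoing link: " + target);  // this line is the whole point: the click gets recorded
        res.sendRedirect(target);         // then the user continues to the real destination
    }
}

A real deployment would also validate or sign the target parameter, since an unchecked version of this is an open redirect that phishers love to abuse.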
As far as your concerns, my personal opinion is that, if you're on the internet, you're being tracked. All the time. If this is concerning to you, I would recommend communicating differently. However, for the most part, I think it's not worth worrying about.
This redirection is also a dereferrer: it avoids disclosing the original URL in the HTTP Referer field to third-party sites, since that URL can contain sensitive data like a session ID.