I built a web site that will be deployed and maintained by my company's IT department. The site's backend needs to access a third-party API on the internet, but IT says outbound access to external networks is not allowed from this server. Is this an acceptable security restriction? What is a secure way to make an external API call?
The reason your IT department wants to restrict access to arbitrary external hosts is to make it harder to move data off your web server to another server in the event that an attacker has managed to upload and execute arbitrary code.
This is not an unreasonable policy, as it helps mitigate an attack even if it doesn't block one outright.
The standard way to allow connections to the outside world in a controlled manner is for your IT department to set up a proxy and have your application make all connections to other sites through it. The proxy should have a whitelist of the domains your code is allowed to connect to, blocking all other requests.
That should allow your software to do what it needs to do, while still limiting an attacker's ability to move data off the server.
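As a concrete illustration, here is a minimal Node.js sketch of routing an API call through such an egress proxy. The proxy host proxy.internal, port 3128, and the API URL are all assumptions; use the values your IT department provides. For plain HTTP, a forward proxy accepts the full absolute URI in the request line:

    // Minimal sketch: calling an external API through an internal egress proxy.
    // "proxy.internal" and port 3128 are assumptions; use the values your IT
    // department provides.
    const http = require('http');

    const options = {
      host: 'proxy.internal',                  // hypothetical proxy host
      port: 3128,                              // hypothetical proxy port
      path: 'http://api.example.com/v1/data',  // absolute URI of the external API
      headers: { Host: 'api.example.com' },    // origin server, not the proxy
    };

    http.get(options, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => console.log(res.statusCode, body));
    }).on('error', (err) => console.error('proxy request failed:', err.message));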
Incidentally, if your IT department is capable of it, the proxy can be configured so that any request to a non-whitelisted site triggers an alarm, since such a request would indicate a probable intrusion: the server is likely running attacker-uploaded code.
Every time I load a browser tab in Chrome on my Mac, the application forks another process. This seems to be different from how Firefox or Safari work. Why did Google stay away from multi-threading in this case? The problem being solved here (rendering multiple pages at once) would seem, in my mind, to be a prime candidate for multi-threading, or am I missing something?
Running each page (or tab) in a separate process gives Chrome a degree of protection against page-rendering bugs, as well as against browser plug-ins running within a process. Basically, if one page crashes, it won't affect other tabs; instead, you'll get an "Aw, Snap!" message.
From the docs:
We use separate processes for browser tabs to protect the overall application from bugs and glitches in the rendering engine. We also restrict access from each rendering engine process to others and to the rest of the system. In some ways, this brings to web browsing the benefits that memory protection and access control brought to operating systems.
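To make the crash-isolation point concrete, here is an illustrative Node.js sketch (not Chrome's actual code): a crash in a child process leaves the parent untouched, analogous to running one renderer per tab.

    // Illustrative sketch only -- not Chrome's code. The child process stands
    // in for a buggy renderer and deliberately crashes; the parent survives.
    const { spawn } = require('child_process');

    const child = spawn(process.execPath, [
      '-e', 'setTimeout(() => { throw new Error("render bug"); }, 100);',
    ]);

    child.on('exit', (code) => {
      // The parent is unaffected -- the equivalent of keeping other tabs
      // alive and showing "Aw, Snap!" in the crashed one.
      console.log(`child exited with code ${code}; parent still running`);
    });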
I am learning Node.js and am currently studying WebSockets. As I understand it, Socket.io was intended to address the inconsistent support that various browsers had for WebSockets. If you check out caniuse for WebSockets, it appears that WebSockets currently have practically full support. Can anyone explain why I should use Socket.io rather than WebSockets in this case?
It handles graceful degradation for you, falling back through numerous technical alternatives to keep bi-directional near-real-time communication flowing (WebSockets, AJAX long polling, Flash, etc.).
As of March 2013, that site lists WebSockets at 61% support. That is not "practically full".
As of September 2021, that site lists WebSockets at 98% support; all modern browsers support them.
It handles browser inconsistencies and varying support levels for you
(these first two points are basically the same value proposition as jQuery, to put it in perspective)
It includes additional features beyond bare-bones WebSockets, such as room support for basic publish/subscribe infrastructure and automatic reconnection (see the sketch after this list).
AFAIK it is more popular and easier to get help with than vanilla WebSockets, at least at the moment.
However, just like there is VanillaJS for the jQuery haters, if you prefer using the official standard WebSocket APIs directly, by all means, knock yourself out.
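Here is a minimal sketch of the room-based publish/subscribe feature mentioned above. It assumes the socket.io package is installed; port 3000 and the room name "news" are arbitrary choices:

    // Minimal sketch of Socket.io rooms -- one of the features you don't get
    // from bare WebSockets. Port 3000 and the "news" room are arbitrary.
    const { Server } = require('socket.io');
    const io = new Server(3000);

    io.on('connection', (socket) => {
      socket.join('news');                      // subscribe this client to a room
      io.to('news').emit('headline', 'hello');  // publish to everyone in the room
    });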
Several sites, such as Bank of America (I remember Yahoo did this too, back when I used my Yahoo account), show a SiteKey or similar user-chosen image after the user enters their username but before they enter their password. Ostensibly, this makes the login page unique to each user, so a phisher can't just show a static login page that looks like the bank's site. But what's stopping a phisher from simply hitting the bank's site in the background and forwarding the image (or other security challenge) straight to the user? I'll grant that it makes the phisher's job slightly harder, but it really doesn't seem that valuable to me. What's the rationale for this behavior?
If a single server keeps hitting the bank's site requesting the images for different user IDs (especially from an address the users have never logged in from before), it will look pretty suspicious, so it's harder for a phisher to hide.
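To sketch what that server-side detection might look like (purely hypothetical code; the data structure and threshold are made up):

    // Hypothetical sketch: flag an IP that requests SiteKey images for many
    // distinct user IDs -- the traffic pattern a man-in-the-middle phishing
    // relay would produce. The threshold is invented for illustration.
    const seen = new Map(); // ip -> Set of user IDs requested from that ip

    function recordSiteKeyRequest(ip, userId) {
      if (!seen.has(ip)) seen.set(ip, new Set());
      const users = seen.get(ip);
      users.add(userId);
      if (users.size > 20) { // arbitrary threshold
        console.warn(`possible phishing relay at ${ip}: ${users.size} user IDs`);
      }
    }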
I have read in numerous places that, in theory, an SSL/443 WebSocket connection has a high success rate when the client is behind corporate proxies and firewalls. This topic also touches on the issue: WebSockets behind enterprise proxies.
Our setup is Node.js with websocket-node on the server side, passing binary data to Chrome 15+ clients. Performance is no issue; it is blazing fast. I do, however, see failed connections from within corporations; in one case I know they are using an explicit proxy, with clients connecting to that proxy server on port 8080.
The first question is in two parts: a) what mechanisms can I use to debug the issue and find out what is blocking the upgrade, and b) for those with experience, what is the most likely culprit?
Secondly, what performance hit will I take if I fall back to flash (i.e. websocket-js)?
Many thanks in advance.
UPDATE: It is now clear that the upgrade handshake request is never seen by the WebSocket server, so it is being blocked on the client side before the request ever gets out.
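For what it's worth, here is a minimal sketch of serving WebSockets over TLS on port 443 (wss://), the configuration the question cites as having the best success rate behind corporate proxies: the handshake then looks like ordinary HTTPS, so intercepting proxies usually just tunnel it. It uses the ws package (a common alternative to websocket-node, chosen here as an assumption), and the certificate paths are placeholders:

    // Sketch: WebSockets over TLS on 443 so intercepting proxies see only a
    // CONNECT tunnel and ordinary HTTPS. Uses the "ws" package (assumption);
    // the certificate file paths are placeholders.
    const https = require('https');
    const fs = require('fs');
    const { WebSocketServer } = require('ws');

    const server = https.createServer({
      key: fs.readFileSync('server-key.pem'),   // placeholder path
      cert: fs.readFileSync('server-cert.pem'), // placeholder path
    });
    const wss = new WebSocketServer({ server });

    wss.on('connection', (socket) => socket.send('upgrade succeeded'));
    server.listen(443);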
I am setting up phplist for a client on my dedicated server. I am going to install a dedicated mail solution on a dedicated IP address for his mailing list. He currently has an estimated 5,000 subscribers interested in his newsletter, and the goal is to send out about that many emails per night.
The current obvious choices are qmail and postfix. Can I get some pros and cons to both? Are there any better solutions?
qmail can handle that volume. DNS will be your main problem when sending to sites like Yahoo, GoDaddy, and Gmail. Make sure you are hosted on clean IPs:
http://www.mxtoolbox.com/blacklists.aspx
http://www.spamhaus.org/sbl/
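Since DNS is the main deliverability hurdle, a quick sanity check of reverse DNS and SPF can be scripted. A hedged Node.js sketch (the IP and domain below are placeholders):

    // Hedged sketch: checking the DNS records large receivers care about
    // before sending bulk mail -- reverse DNS (PTR) on the sending IP and an
    // SPF TXT record on the domain. The IP and domain are placeholders.
    const dns = require('dns').promises;

    async function checkDeliverability(ip, domain) {
      const ptr = await dns.reverse(ip).catch(() => []);
      console.log(`PTR for ${ip}:`, ptr.length ? ptr : 'MISSING -- set reverse DNS');

      const txt = await dns.resolveTxt(domain).catch(() => []);
      const spf = txt.map((chunks) => chunks.join('')).filter((r) => r.startsWith('v=spf1'));
      console.log(`SPF for ${domain}:`, spf.length ? spf : 'MISSING -- add an SPF record');
    }

    checkDeliverability('203.0.113.10', 'example.com'); // placeholder values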
If you use qmail, this is a good site: http://www.qmailrocks.org/
A newsletter going out every night sounds like spam. Be careful which ISP you host it with; they may suspend your account when they get complaints (there are always complaints about bulk email, even when it is solicited).