Tips for securing our "public" code repository server

I'm working at an IT company where we have used Perforce for years as our code repository system, inside our internal company network. Because we are starting to work with an offsite company, we are looking into ways of making our Perforce server accessible via the internet.
The most obvious way for me to do this is to set up a VPN server on our Linux gateway server and allow access through that. Obviously this works, but it seems very unsafe. If a VPN key of a certain user falls into the wrong hands, they can access our code repository AND our complete internal network.
My first thought was to create a Perforce proxy server (they supply software for this) and host it behind another gateway, with a VPN server. This shields the real Perforce server and our network better. The obvious problem here is that the proxy needs access to our Perforce server, meaning the two networks need to be connected anyway.
Our company is rather small, so taking into account that we don't have a huge resource pool to spend on this, how would you approach this?
thanks a lot in advance,
Fred.

Instead of modifying the behavior and configuration of your current internal server, perhaps you should set up a second Perforce server, and use that server only for the interactions between your team and your offsite partner.
Include explicit steps in your workflow to periodically "publish" code from your internal server to your external server, and similar steps to periodically "consume" the offsite company's work by copying their changes back to your internal server and re-submitting them there.
Additionally, there are companies which offer a hosted Perforce service, so you don't even have to operate this external Perforce server yourself; you can let the hosting company manage the operational aspects of this code-sharing server.

One option is to put the official Perforce server on a locked-down server outside the internal network. That is to say, everyone accesses Perforce at the outside server, which has Perforce and only Perforce on it.

Related

Is ngrok safe to use or can it be compromised?

Is ngrok a safe tool to use? I was reading a tutorial which recommended using ngrok to test API responses that I make to outside services that also need to connect to my endpoints.
There is no source code available for version 2.0, even though it started as an open source project in 2014. I am suspicious of any code that opens a tunnel from the cloud to my localhost. Pretty scary stuff, especially without source code!
It opens up a tunnel to your dev machine, which is partially secured by obscurity (a hard-to-guess subdomain) and can be further secured by requiring a password. But you're still opening yourself up to ngrok itself, and the company is completely opaque (no address, no employees, no business name, no LinkedIn presence; all I can find is that it has 1-10 employees and is private; I'm not even sure what country it's based in). On top of that, the code is not open-sourced. There's no reason to think they're not legit, but there's not a lot of information available to build trust.
You may be able to use ngrok and other local tunnel services with more security by encrypting the traffic. See https://security.stackexchange.com/questions/177280/end-to-end-encryption-for-localtunnel-ngrok-setup/177357#177357 for more information.
I found a good rating, but vacuous information, here:
http://www.scamadviser.com/is-ngrok.com-a-fake-site.html
The kicker for me is
https://developer.atlassian.com/blog/2015/05/secure-localhost-tunnels-with-ngrok/
where the Atlassian folks recommend it highly.
I think I am going to use it.
If anyone is concerned about compromising their development environment, you can use Docker. There are many ngrok/Docker projects, but here is the one I chose: https://github.com/gtriggiano/ngrok-tunnel
for macOS, use "TARGET_HOST=docker.for.mac.localhost"
They now offer a service where you locally run only ssh, no need to run any of their code on your machine.
You run something like ssh -R 80:localhost:8501 tunnel.us.ngrok.com http. This connects to one of their hosts and forwards connections they receive back to your machine and the service you run on localhost:8501.
This seems secure to me; the only thing is that you don't know what information they collect and who is connecting to your exposed service. They print all connections, but it's their binary that does this, and someone might well listen in without you noticing. You can check connections on your end, but you cannot be sure who it is that connects.
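If you want at least some visibility on your end, you can put a trivial logging layer in front of whatever you expose through the tunnel. A minimal Python sketch (port 8501 is taken from the example above; everything else is illustrative) that prints the immediate peer and every request header, which is where any forwarded client address would show up:
from http.server import BaseHTTPRequestHandler, HTTPServer
class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log the immediate peer and all request headers before answering; with a tunnel,
        # the real visitor (if reported at all) only appears in forwarded headers.
        print(f"{self.client_address[0]} requested {self.path}")
        print(self.headers)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")
# Serve on localhost:8501, the port being forwarded through the tunnel in the example above.
HTTPServer(("127.0.0.1", 8501), LoggingHandler).serve_forever()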
Ngrok is a convenient utility for creating secure tunnels to locally hosted applications via a reverse proxy; in other words, it is a utility for publishing locally hosted applications on the web. Simply put, it gives any locally hosted application a publicly accessible web URL, e.g. a Spring Boot or Node.js based web application, a webhook for a chat application, etc.

Dynamic DNS entry discovery

How are dynamic DNS entries discovered by hackers, and what tools are they using to glean this information?
A few days ago I signed up at no-ip.org for a free DNS entry in order to open up my e-commerce site to a third party that needs to make calls to it in my development environment. Within a day I saw IP addresses coming to my site that are NOT from this third party. I'm wondering how this brand new DNS entry was discovered, and so quickly. At least one of these persons was attempting to hack the site and knew exactly the base product I was working with, an open source e-commerce system, and attempted to gain access to the admin area. This has got me curious about how exactly these hackers are able to pull this information so quickly and know exactly the product I'm working with.
For now I've white-listed the IP addresses from this third party, but I'd like to apply the same logic these hackers are using to look at my site from a security standpoint and better protect it before we go to production.
Being alerted to new entries listed in a nameserver requires privileged access to the zone files on the server, regardless of whether those entries are added through manual edits to the zone files or through an automated process like DDNS. A quick check shows that those rights aren't enabled by default through the standard mechanism at no-ip.
> server nf5.no-ip.com
Default Server: nf5.no-ip.com
Address: 83.222.240.75
> ls no-ip.com
[nf5.no-ip.com]
*** Can't list domain no-ip.com: Server failed
The DNS server refused to transfer the zone no-ip.com to your computer. If this
is incorrect, check the zone transfer security settings for no-ip.com on the DNS
server at IP address 83.222.240.75.
They do enable zone transfers by request, and I suppose that would be a nice thing for a hacker to monitor; fresh servers have the easiest vulnerabilities.
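If you want to repeat that check programmatically against your own zone, here is a small sketch using the third-party dnspython package (the domain is a placeholder; run it against a zone you control):
import dns.query
import dns.resolver
import dns.zone
DOMAIN = "example.com"  # put your own zone here
# Find the zone's authoritative nameservers, then attempt a full zone transfer (AXFR) against each one.
for ns in dns.resolver.resolve(DOMAIN, "NS"):
    host = str(ns.target).rstrip(".")
    addr = str(dns.resolver.resolve(host, "A")[0])
    try:
        zone = dns.zone.from_xfr(dns.query.xfr(addr, DOMAIN))
        print(f"{host} allowed a zone transfer: {len(zone.nodes)} names exposed")
    except Exception as exc:
        print(f"{host} refused the transfer ({exc.__class__.__name__}) - good")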
But honestly, it's just as likely that it was a random IP hit, as Marc suggested. Getting your product info isn't hard either. After cataloging the server as a new device, it's typically easy to identify the service platform. Just establishing a TCP/IP connection to the server will typically reveal the operating system it runs, through subtle tells in the number sequences in IP packets and other tidbits of information. It can look deceptively like someone knew all about your server upon first connection.

Node.js online app with intranet DB

What I want to build is an application that sits online and is used by different groups that each have their own intranet. Now, because of a stupid security policy, the data can't sit outside the intranet. How would you go about building an app that is still online, so you can push updates to everyone at once, but has a DB on each intranet's server?
My initial plan is to use Node.js and MongoDB.
If your DB really has to be onsite, and your app really has to be offsite, then probably your only option is to set up a secure connection from your app to the onsite DB and treat it as if it were hosted locally to the app. This may or may not violate the security policies. You could in theory lock it down pretty well with a VPN between the two networks. But this is not for the faint of heart, performance will suffer, and it has security issues of its own. It also means a bit of work for every site.
If the only reason you want it to be "online" is for pushing updates, as you stated, then you'll do better installing the app on-premises and having it poll a central server for notifications about new versions, download updates to itself, and install them automatically. Once you've created this, a new installation requires no new work.
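A rough sketch of that polling loop (in Python for brevity; the /latest-version endpoint, its JSON fields and the polling interval are all assumptions, not part of the original suggestion):
import json
import time
import urllib.request
UPDATE_URL = "https://updates.example.com/latest-version"  # hypothetical central endpoint
CURRENT_VERSION = "1.4.2"
while True:
    # Ask the central server what the newest release is; the connection is always outbound.
    with urllib.request.urlopen(UPDATE_URL) as resp:
        info = json.load(resp)
    if info["version"] != CURRENT_VERSION:
        # Download the new build; installing and restarting is left to the host platform.
        urllib.request.urlretrieve(info["download_url"], "update-package.tgz")
        print("Downloaded", info["version"], "- ready to install")
        break
    time.sleep(6 * 60 * 60)  # poll a few times a day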
I'm facing a similar problem right now, so here's my take on it; it's not really Mongo or Node specific.
Put a DB and a simple RESTful server on each client's intranet. The servers can be exactly the same.
Put a routing facade, accessible from the internet, that redirects requests to the appropriate server based on the URL, i.e. http://facade/server/resource becomes a request to http://server/resource.
Configure the facade so that requests to http://facade/resource go to each server, retrieve the results, and return all of them in some aggregated form.
Obviously there are more details to take into account, like permissions (can everybody publish to each server? if not, who can?), but the general idea is there.
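To make the facade idea concrete, here is a minimal sketch (Python with Flask and requests, chosen only for brevity; the server map is illustrative and authentication/permissions are omitted):
import requests
from flask import Flask, jsonify, request
app = Flask(__name__)
# Logical server names mapped to the reachable address of each intranet's REST server.
SERVERS = {"siteA": "http://10.1.0.5:3000", "siteB": "http://10.2.0.5:3000"}
@app.route("/<path:full>")
def facade(full):
    first, _, rest = full.partition("/")
    if first in SERVERS and rest:
        # http://facade/siteA/resource becomes a request to http://10.1.0.5:3000/resource
        upstream = requests.get(f"{SERVERS[first]}/{rest}", params=request.args)
        return upstream.content, upstream.status_code
    # http://facade/resource fans out to every server and returns the aggregated results.
    results = {name: requests.get(f"{base}/{full}").json() for name, base in SERVERS.items()}
    return jsonify(results)
if __name__ == "__main__":
    app.run(port=8080)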

Accessing data in internal production databases from a web server in DMZ

I'm working on an external web site (in DMZ) that needs to get data from our internal production database.
All of the designs that I have come up with are rejected because the network department will not allow a connection of any sort (WCF, Oracle, etc.) to come inside from the DMZ.
The suggestions that have come from the networking side generally fall under two categories -
1) Export the required data to a server in the DMZ and export modified/inserted records eventually somehow, or
2) Poll from inside, continually asking a service in the DMZ whether it has any requests that need to be serviced.
I'm averse to suggestion 1 because I don't like the idea of a database sitting in the DMZ. Option 2 seems like a ridiculous amount of extra complication for the nature of what's being done.
Are these the only legitimate solutions? Is there an obvious solution I'm missing? Is the "No connections in from DMZ" decree practical?
Edit: One line I'm constantly hearing is that "no large company allows a web site to connect inside to get live production data. That's why they send confirmation emails". Is that really how it works?
I'm sorry, but your networking department are on crack or something like that - they clearly do not understand what the purpose of a DMZ is. To summarize - there are three "areas" - the big, bad outside world, your pure and virginal inside world, and the well known, trusted, safe DMZ.
The rules are:
Connections from outside can only get to hosts in the DMZ, and on specific ports (80, 443, etc);
Connections from the outside to the inside are blocked absolutely;
Connections from the inside to either the DMZ or the outside are fine and dandy;
Only hosts in the DMZ may establish connections to the inside, and again, only on well known and permitted ports.
Point four is the one they haven't grasped - the "no connections from the DMZ" policy is misguided.
Ask them "How does our email system work then?" I assume you have a corporate mail server, maybe exchange, and individuals have clients that connect to it. Ask them to explain how your corporate email, with access to internet email, works and is compliant with their policy.
Sorry, it doesn't really give you an answer.
I am a security architect at a Fortune 50 financial firm. We had these same conversations. I don't agree with your network group. I understand their angst, and I understand that they would like a better solution, but most places don't opt for the better choices (due to ignorance on their part [i.e. the network guys, not you]).
Two options if they are hard set on this:
You can use a SQL proxy solution like GreenSQL (I don't work for them, I just know of them); they are at greensql dot com.
The approach they refer to, which most "large orgs" use, is a tiered web model: a front-end web server (accessed by the public at large), a mid-tier (the application or services layer where the actual processing occurs), and a database tier. The mid-tier is the only thing that can talk to the database tier. In my opinion this model is optimal for most large orgs. BUT, that being said, most large orgs will run into a vendor-provided product that does not support a middle tier, or they developed without a middle tier and the transition requires development resources they don't have to spare to build the mid-tier web services, or there is plainly no priority at some companies to go that route.
It's a gray area, with no solid right or wrong in that regard, so if they are speaking in terms of finality then they are clearly wrong. I applaud their zeal; as a security professional I understand where they are coming from. BUT we have to enable the business to function securely. That's the challenge and the gauntlet I always throw down to myself: how can I deliver to my customers (my developers, my admins, my DBAs, business users) what they want (within reason, and if I tell someone no, I always try to offer an alternative that meets most of their needs)?
Honestly, it should be an open conversation. Here's where I think you can get some room: ask them to threat-model the risk they are looking to mitigate. Ask them to offer alternative solutions that enable your web apps to function. If they are saying they can't talk, then put the onus on them to provide a solution. If they can't, then you default to it working. Cite that you open connections from the DMZ to the DB ONLY on the approved ports. Let them know that the DMZ is for offering external services, and external services are not good for much more than file transfer without internal data.
Just my two cents; I hope this comment helps. And try to be easy on my security brethren. We have some less experienced, misguided members in our flock who cling to some old ways of doing things. As the world evolves, the threat evolves, and so does our approach to mitigation.
Why don't you replicate your database servers? You can ensure that the connection is from the internal servers to the external servers and not the other way.
One way is to use the MS Sync Framework - you can build a simple Windows service that synchronizes changes from the internal database to your external database (which can reside on a separate DB server) and then use that in your public-facing website. The advantage is that your sync logic can filter out sensitive data and keep only the things that are really necessary. And since the entire control of the data will be on your internal servers (PUSH data out instead of pulling it), I don't think IT will have an issue with that.
The connection formed is never inbound - it is outbound - which means no inbound ports need to be opened.
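As an illustration of that push pattern (not the Sync Framework itself, just the shape of the idea in Python; the table, its modified_at column and the use of sqlite3 as a stand-in for both databases are assumptions):
import sqlite3
internal = sqlite3.connect("internal.db")   # stand-in for the internal production DB
external = sqlite3.connect("external.db")   # stand-in for the copy the public site reads
last_sync = external.execute("SELECT COALESCE(MAX(modified_at), 0) FROM products").fetchone()[0]
# Pull only rows changed since the last push, and only the non-sensitive columns.
rows = internal.execute(
    "SELECT id, name, price, modified_at FROM products WHERE modified_at > ?",
    (last_sync,),
).fetchall()
for row in rows:
    # Upsert into the external copy; the connection is opened from the inside, going out.
    external.execute(
        "INSERT INTO products (id, name, price, modified_at) VALUES (?, ?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name=excluded.name, price=excluded.price, modified_at=excluded.modified_at",
        row,
    )
external.commit()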
I'm mostly with Ken Ray on this; however, there appears to be some missing information. Let's see if I get this right:
You have a web application.
Part of that web application needs to display data from a different production server (not the one that normally backs your site).
The data you want/need is handled by a completely different application internally.
This data is critical to the normal flow of your business and only a limited set needs to be available to the outside world.
If I'm on track, then I would have to say that I agree with your IT department and I wouldn't let you directly access that server either.
Just take option 1. Have the production server export the data you need to a commonly accessible drop location. Have the other DB server (the one in the DMZ) pick up the data and import it on a regular basis. Finally, have your web app ONLY talk to the DB server in the DMZ.
Given how a lot of people build sites these days, I would also be loath to just open a SQL port from the DMZ to the web server in question. Quite frankly, I could be convinced to open the connection if I was assured that 1. you only used stored procs to access the data you need; 2. the account information used to access the database was encrypted and completely restricted to only running those procs; 3. those procs had zero dynamic SQL and were limited to SELECTs; 4. your code was built right.
A regular IT person would probably not be qualified to answer all of those questions. And if this database was from a third party, I would bet you might lose support if you were to start accessing it from outside its normal application.
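For what it's worth, the web-tier side of those conditions is not much code. A sketch in Python with pyodbc (the DSN, the low-privilege account and the procedure name are all illustrative):
import pyodbc
# Connect with a low-privilege account that can ONLY execute the approved, read-only procs.
conn = pyodbc.connect("DSN=dmz_orders;UID=web_reader;PWD=********")
cursor = conn.cursor()
# No dynamic SQL in the web tier: a single parameterized call to an approved procedure.
cursor.execute("{CALL get_public_order_status (?)}", ("ORDER-12345",))
for row in cursor.fetchall():
    print(row)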
Before talking about your particular problem I want to deal with the Update that you provided.
I haven't worked for a "large" company - though "large" is hard to judge without context - but I have built my share of web applications for the nonprofit and the university department that I used to work for. In both situations I have always connected to the production DB on the internal network from the web server in the DMZ. I am pretty sure many large companies do this too; think, for example, of how SharePoint's architecture is set up - back-end indexing, database, etc. servers, which are connected to by front-facing web servers located in the DMZ.
Also, the practice of sending confirmation e-mails - which I believe refers to the confirmations you receive when you register for a site - doesn't usually deal with security. It is more a method to verify that a user has entered a valid e-mail address.
Now with that out of the way, let us look at your problem. Unfortunately, other than the two solutions you presented, I can't think of any other way to do this. Though some things that you might want to think about:
Solution 1:
Depending on the sensitivity of the data that you need to work with, extracting it onto a server on the DMZ - whether using a service or some sort of automatic synchronization software - goes against basic security common sense. What you have done is move the data from a server behind a firewall to one that is in front of it. They might as well just let you get to the internal db server from the DMZ.
Solution 2:
I am no networking expert, so please correct me if I am wrong, but a polling mechanism still requires some sort of communication back from the web server to inform the database server that it needs some data back, which means a port needs to be open, and again you might as well tell them to let you get to the internal database without the hassle, because you haven't really added any additional security with this method.
So, I hope that this helps in at least providing you with some arguments to allow you to access the data directly. To me it seems like there are many misconceptions in your network department about what a secure, database-backed web application architecture should look like.
Here's what you could do... it's a bit of a stretch, but it should work...
Write a service that sits on the server in the DMZ. It will listen on three ports, A, B, and C (pick whatever port numbers make sense). I'll call this the DMZ tunnel app.
Write another service that lives anywhere on the internal network. It will connect to the DMZ tunnel app on port B. Once this connection is established, the DMZ tunnel app no longer needs to listen on port B. This is the "control connection".
When something connects to port A of the DMZ tunnel app, it will send a request over the control connection for a new DB/whatever connection. The internal tunnel app will respond by connecting to the internal resource. Once this connection is established, it will connect back to the DMZ tunnel app on port C.
After possibly verifying some tokens (this part is up to you), the DMZ tunnel app will then forward data back and forth between the connections it received on ports A and C. You will effectively have a transparent TCP proxy created from two services running in the DMZ and on the internal network.
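A stripped-down sketch of the DMZ-side service in Python, handling one tunnelled connection at a time (the port numbers and the one-line "CONNECT" control message are arbitrary choices; the internal-side service would mirror this by dialling out to port B, then connecting to the internal resource and to port C whenever it reads a CONNECT line):
import socket
import threading
PORT_A, PORT_B, PORT_C = 9000, 9001, 9002  # client port, control port, data-back port (arbitrary)
def listen_on(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("0.0.0.0", port))
    s.listen(5)
    return s
def pump(src, dst):
    # Copy bytes one way; when the source closes, signal EOF to the other side.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass
control_conn, _ = listen_on(PORT_B).accept()   # the internal service dials out to us once
listener_a, listener_c = listen_on(PORT_A), listen_on(PORT_C)
while True:
    client, _ = listener_a.accept()            # something connects on port A
    control_conn.sendall(b"CONNECT\n")         # ask the internal side for a new back-connection
    backhaul, _ = listener_c.accept()          # the internal side connects back on port C
    threading.Thread(target=pump, args=(client, backhaul), daemon=True).start()
    threading.Thread(target=pump, args=(backhaul, client), daemon=True).start()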
And, for the best part, once this is done you can explain what you did to your IT department and watch their faces as they realize that you did not violate the letter of their security policy, but you are still being productive. I tell you, they will hate that.
If no development solution can be applied because of system engineering restrictions in the DMZ, then give them the ball.
Put your website on the intranet, and tell them: 'Now I need inbound HTTP:80 or HTTPS:443 connections to that application. Set up whatever you want: reverse proxies, ISA Server, protocol breaks, SSL... I will adapt my application if necessary.'
As for ISA, I guess they already have one if you have Exchange with external connections.
A lot of companies choose this solution when a resource needs to be shared between the intranet and the public.
Setting up a dedicated intranet network with strict security rules is the best way to make administration, integration and deployment easier. What is easier is well known, and what is known is mastered: fewer security breaches.
More and more system engineers (like mine) prefer to maintain an intranet network with a small 'security breach' like HTTP rather than open up other protocols and ports.
By the way, if they knew WCF services, they would have accepted this solution. It is the most secure solution if well designed.
Personally, I use these two methods: TCP (HTTP or not) services, and ISA Server.

How to simulate browsing from various locations?

I want to check a particular website from various locations. For example, I see the site example.com from the US and it works fine. A colleague in Europe says he cannot see the site (he gets a DNS error).
Is there any way I can check that for myself instead of asking him every time?
This is a bit of self promotion, but I built a tool to do just this that you might find useful, called GeoPeeker.
It remotely accesses a site from servers spread around the world, renders the page with webkit and sends back an image. It will also report the IP address and DNS information of the site as it appears from that location.
There are no ads, and it's very streamlined to serve this one purpose. It's still in development, and feedback is welcome. Here's hoping somebody besides myself finds it useful!
Sometimes a website doesn't work on my PC and I want to know if it's the website or a problem local to me (e.g. my ISP, my router, etc.).
The simplest way to check a website while avoiding your local network resources (and thus any problems caused by them) is to use a web proxy such as Proxy.org.
Well, DNS should be the same worldwide, shouldn't it? Of course, it can take up to a day or so until your new DNS record is propagated around the world. So either something is wrong on your colleague's end, or the DNS record still needs some time...
I usually use online DNS lookup tools for that, e.g. http://network-tools.com/
It can check your HTTP headers as well, though a proxy located in Europe would be better.
Besides using multiple proxies or proxy networks, you might want to try PlanetLab. (And there are probably other similar institutions around.)
The social solution would be to post on some board that you are looking for volunteers to proxy your requests. (They only have to allow a single destination in their proxy config, so the danger of being turned into an open spam relay is relatively low.) You should prepare credentials that assure your partners of the authenticity of the claim that the destination is indeed your computer.
DNS info is cached in many places. If you have a server in Europe, you may want to try to proxy through it.
It depends on whether the location is detected by different DNS resolution from different locations, or by the IP address that you are browsing from.
If it's by DNS, you could just modify your hosts file to point at the server used in Europe. Get your friend to ping the address to see if it's different from the one yours resolves to.
To browse from a different IP address:
You can rent a VPS server and use PuTTY / SSH to act as a proxy. I use this from time to time to browse from the US using a VPS server I rent in the US.
Having an account on a remote host may or may not be enough. Sadly, my Dreamhost account, even though I have SSH access, does not allow proxying.
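If you go the VPS route, the check is easy to script once the tunnel is up: start a SOCKS proxy with something like ssh -D 1080 user@your-vps, then fetch the page through it. This Python sketch assumes the requests library with SOCKS support (PySocks) installed; the socks5h scheme makes DNS resolution happen on the remote end, which is exactly the difference you want to observe here:
import requests
# Route both the TCP connection and the DNS lookup through the SSH SOCKS tunnel on localhost:1080.
proxies = {"http": "socks5h://127.0.0.1:1080", "https": "socks5h://127.0.0.1:1080"}
resp = requests.get("http://example.com", proxies=proxies, timeout=10)
print(resp.status_code, len(resp.content), "bytes, as seen from the VPS's location")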
The only thing that springs to mind for this is to use a proxy server based in Europe. Either have your colleague set one up [if possible] or find a free proxy. A quick Google search came up with http://www.anonymousinet.com/ as the top result.

Resources