Running Docker containers each with a socks5 proxy - Linux

Intro:
I have an app that, when I run it, connects to a server and shares its bandwidth (basically it gives out its public IP for the server to use).
Development:
Now I wanted that app to use a transparent socks5 proxy for sending and receiving requests. This was possible when I downloaded the app (the non-containerized version) on Ubuntu and configured redsocks and iptables on it; the server the app connected to then assumed the app's public IP was the one I specified (socks5://user:pass@ip:port) (I own these proxy IPs too).
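For reference, the host-level setup looked roughly like the following. First the redsocks side, in /etc/redsocks.conf (the proxy address, port and credentials are placeholders):

    base {
        log_info = on;
        daemon = on;
        redirector = iptables;
    }
    redsocks {
        local_ip = 127.0.0.1;
        local_port = 12345;
        type = socks5;
        ip = 203.0.113.10;    // placeholder: upstream socks5 proxy IP
        port = 1080;          // placeholder: upstream socks5 proxy port
        login = "user";
        password = "pass";
    }

Then the iptables side, which redirects outgoing TCP into redsocks while skipping private ranges and the proxy itself:

    sudo iptables -t nat -N REDSOCKS
    sudo iptables -t nat -A REDSOCKS -d 203.0.113.10 -j RETURN
    sudo iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
    sudo iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
    sudo iptables -t nat -A REDSOCKS -d 172.16.0.0/12 -j RETURN
    sudo iptables -t nat -A REDSOCKS -d 192.168.0.0/16 -j RETURN
    sudo iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 12345
    sudo iptables -t nat -A OUTPUT -p tcp -j REDSOCKS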
Requirement:
Next, I wanted to scale it out, so I looked for a Docker image of this app. It was available, so I downloaded and ran it, and it worked fine, but the IP the server received was the non-proxy IP (obviously, as I had not configured any proxy).
My proposed idea was to run multiple Docker containers,
but I still don't know how to configure a different socks5 proxy for each of these containers.
Can someone advise me on how to go forward with this?
Thank you in advance :D
Edit: I had mentioned I was trying to do on an Ubuntu Docker container what I did with the Ubuntu system; I removed that and reorganized the whole situation.
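To illustrate the idea: what I imagine is a wrapper image that runs the same redsocks/iptables setup inside each container and takes the proxy from environment variables, so each container can be pointed at a different proxy (the image name and the PROXY_* variables are purely hypothetical here):

    # Each container gets its own upstream socks5 proxy via environment variables;
    # NET_ADMIN is needed so the entrypoint can add iptables rules inside the container.
    docker run -d --name app1 --cap-add NET_ADMIN \
        -e PROXY_HOST=203.0.113.10 -e PROXY_PORT=1080 \
        -e PROXY_USER=user -e PROXY_PASS=pass \
        myapp-with-redsocks

    docker run -d --name app2 --cap-add NET_ADMIN \
        -e PROXY_HOST=203.0.113.20 -e PROXY_PORT=1080 \
        -e PROXY_USER=user -e PROXY_PASS=pass \
        myapp-with-redsocks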

Related

Deploy React website with node.js backend

I have developed a website in React, with Node.js as an intermediary to send and receive responses.
Now I want to put this on a machine in the lab, so everyone can access it.
Can I install Node on the lab machine, run the same setup as on my dev machine, and give out the IP of that server?
I tried searching for deployment with Node.js and React, but everyone is using AWS or some external server after npm build.
How do I deploy my React app on a lab machine without actually copying the code?
Two things before you share your IP and port:
Have a process monitor set up on your lab machine; check out pm2. This will help you run your application in the background,
monitor it, and start it again after a restart.
Enable the port in your firewall for everyone to use. If your lab system is Ubuntu, allow the port through ufw.
Now you can share your IP:PORT and everyone will be able to access it, provided they are all connected to the same network.
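For example (assuming the app's entry point is server.js and it listens on port 3000; adjust both to your setup):

    # Run the app in the background and keep it alive across reboots
    npm install -g pm2
    pm2 start server.js --name lab-app
    pm2 save
    pm2 startup        # prints a command to run so pm2 starts on boot

    # Open the port in the firewall (Ubuntu)
    sudo ufw allow 3000/tcp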
If you want everybody to access your server, you can share your private IP with them, so others can access it at
yourip:port

Host "localhost"overwrites "localhost"in docker

I'm trying to run a Vue dashboard in a Docker container and a service on my local machine. Both are bound to "localhost:80". When running both the dashboard and the service in Docker, everything works fine, but when running the dashboard in Docker and the second service on the host machine, the host machine somehow overrides access to the Docker localhost. So the expected behaviour is:
- http://localhost -> should load the dashboard
- http://localhost/graphql -> should load the api for graphql from the second service
These work when both are running in Docker, but not when one is in Docker and the second one is running on the host.
Any idea how to solve the issue? The reason I need the second service running on the host is so that I can debug and code more quickly, instead of building an image after each code change and updating the stack.
Thanks, Zoli.
localhost inside Docker refers to the Docker container itself. You can't access the actual host from inside Docker with that.
However, your actual host has an IP address in the Docker network. You can access your actual host using that IP. You can get it by running ifconfig and looking for a docker interface. On my machine the actual host has the IP 172.17.0.1.
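For example (the port and path are taken from the question above; the bridge address may differ on your machine, and the host service must listen on an address the container can reach, not only on 127.0.0.1):

    # On the host: find the docker bridge address (ifconfig works too)
    ip addr show docker0 | grep 'inet '

    # From inside the dashboard container: reach the service running on the host
    curl http://172.17.0.1/graphql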
Problem solved: on the host I had to change the port and it started to work. When both services are running in Docker, port 80 can be assigned to both containers and will be resolved correctly, but when one is running in Docker and the other on the host, it will not work. That is my explanation; maybe someone can give a better one, but the problem is solved now. Thanks.
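In other words, only one process can own a given host port at a time; something like the following avoids the clash (the image name and port 4000 are just examples):

    # See which process already holds port 80 on the host
    sudo ss -tlnp | grep ':80 '

    # Keep the dashboard container published on host port 80 ...
    docker run -d -p 80:80 my-vue-dashboard
    # ... and run the host service being debugged on another port, e.g. 4000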

Can't connect remotely to Jenkins being run on a Debian 8 VM

I've recently set up a Debian 8 Jessie VM on Google Cloud. I've installed Jenkins and have the service up and running (verified by "sudo service jenkins status"), yet I can't connect to the VM's external IP from another machine. I used to run Jenkins on my personal computer until I decided I needed a dedicated server to run it continuously. When I was running it on my personal machine I would just access localhost:8080 and the Jenkins dashboard would load fairly quickly. However, upon trying to access the external IP address of the VM running Jenkins, I'm usually greeted with "Connection refused" in my web browser.
At the suggestion of most posts I've seen regarding such issues, I've lifted all firewalls on the VM and have tried to ensure that the VM is listening at the correct IP address, but nothing seems to be able to change the outcome presented by my browser. Where does the issue most likely reside: the VM, Google Cloud, or Jenkins? I'm at a loss.
My first guess is a connection/firewall issue. To test this, you could try a port forward using SSH: SSH into your server with a local port forward: ssh -L 8080:localhost:8080 yourserver. You should then be able to direct your web browser at http://localhost:8080/ and your packets flow through the SSH connection. If that makes it work, have a good look at
How to open a specific port such as 9090 in Google Compute Engine. Or better yet, if you are the only one using that Jenkins server, just keep using the SSH tunnel. It's much more secure than opening Jenkins to the public internet.
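For reference, on Google Cloud such a firewall rule looks roughly like this (the rule name and the jenkins target tag are assumptions; tag your VM accordingly or drop --target-tags):

    # Allow inbound TCP 8080 to instances tagged "jenkins"
    gcloud compute firewall-rules create allow-jenkins-8080 \
        --allow tcp:8080 \
        --target-tags jenkins \
        --source-ranges 0.0.0.0/0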
Have you tried installing tcpdump on the VM and doing a packet capture? That way you can determine where the traffic is being dropped. If you don't see any traffic, it is being dropped somewhere in the cloud before it gets to your VM. If you are seeing traffic, then you need to determine whether it is Jenkins or some agent on the host (perhaps a firewall, though you mentioned you cleared all the rules). I would suggest stopping the Jenkins service and then trying to access it again. Do you get the same "Connection refused" message? If so, then it is something on the VM. If not, then it is something at the application layer, i.e. Jenkins.
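Something along these lines would do it (assuming Jenkins is on its default port 8080):

    # Watch for incoming traffic on the Jenkins port while you try to connect from outside
    sudo tcpdump -i any -nn 'tcp port 8080'

    # And check whether anything is actually listening on that port
    sudo ss -tlnp | grep 8080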
Happy hunting!!!

Making multiple Docker Machines accessible across local network. Linux & Mac

I know there are several questions similar to this, but as far as I can see there's not an answer for the setup that I can get to work, and as far as documentation goes I'm a bit lost.
My goal is to set up a Linux development server on the local network on which I can run multiple docker machines/containers, one for each of our projects.
Ideally, I would create a docker-machine on the dev box and then be able to access it from any of my local network machines. I can run Docker on the Linux box directly and access it by publishing ports, but I want to run multiple machines with different IP addresses so that we can have multiple VMs running (multiple projects).
I've looked at Docker Swarm and overlay networks and just not been able to find a single tutorial or set of instructions to get this sort of set up running.
So I have a dev box at 192.168.0.101 with docker-machine on it. I want to create a new machine, run nginx on it, and then access nginx from another machine on the local network, i.e. http://192.168.99.1/, then set up another and access that too at, say, http://192.168.99.2/.
If anyone has managed to do this, I'd be interested to know how.
One way I've been thinking about doing it is running nginx on the dev box itself and setting up config rules to proxy to the local machines. I'm unsure how well this would work (it works for web servers, but what if I want to ssh or bash into one of those machines, or if one has a MySQL container I want to connect to?).
Have you considered running your docker machines inside LXD containers?
Stéphane Graber's site has a lot of relevant information:
https://stgraber.org/category/lxd/
The way that I resolved this is by using NAT on the Linux box and then assigning a different IP to each machine. I followed the instructions here: http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/ which finally got me to the point of being able to share multiple docker machines on the same port (80) on different IPs.
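Roughly, that boils down to adding an extra LAN address on the dev box for each machine and DNAT-ing it to that machine (all addresses below are placeholders for my setup):

    # Extra LAN address for "project 1" on the dev box
    sudo ip addr add 192.168.0.201/24 dev eth0

    # Forward port 80 arriving on that address to the docker machine behind it
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A PREROUTING -d 192.168.0.201 -p tcp --dport 80 \
        -j DNAT --to-destination 192.168.99.100:80
    sudo iptables -t nat -A POSTROUTING -d 192.168.99.100 -j MASQUERADE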

Setting up a web server for access outside of subnetwork (Node.js, Nginx maybe, Ubuntu server)

A little bit of context: I have developed a web app on Node.js (and a glamorous set of extensions). It has been approved for testing with real users at my company and I am supposed to deploy it now. The problem is that I basically have no idea how to attack this problem. I have so many questions.
For the moment I have created a virtual machine on the local server. I have installed Ubuntu Server on it and I have an intuition about how to deploy the app on it (I suppose following the same steps as when I started to work on this project). I do not know, however, whether I can have remote access to this virtual machine from outside my network. I also don't know if additional configuration is needed on Ubuntu's side to make such an idea work (for example: in the installation there was a part about proxies that I decided to ignore at the time).
From the few documents I have read since I was assigned this, a solution may lie in using nginx. The logic behind it, if I am not mistaken (and please correct me if I am), is that nginx can take the HTTP requests arriving on port 80 (which is normally open for access on most machines) and forward them to a specific port on the machine (the sexy app I have developed).
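A minimal sketch of what such an nginx configuration could look like, assuming the Node app listens on port 3000 and the site file lives at /etc/nginx/sites-available/myapp (both are assumptions):

    server {
        listen 80;
        server_name _;                          # any hostname or raw IP

        location / {
            proxy_pass http://127.0.0.1:3000;   # forward to the Node app
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

followed by enabling the site and reloading nginx:

    sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
    sudo nginx -t && sudo systemctl reload nginx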
To start off, what resources would I need? Would I need a domain name? Is it necessary? Do I need a different virtual server to link the apps, or can they be on the same machine?
If you have additional comments or tips for someone who is learning to do this kind of thing, please share them.
For remote access, you will need a couple of things. First of all, you will need to make sure that your virtual machine is on a bridged adapter. I'm not sure what virtual machine software you are on, or I'd give you more detail on how to do this. Second, you will need to make sure that your router has port 80 (or whatever port you chose to use) set up via port forwarding so that incoming requests map to the server (a request comes to the router on that port, and the router must then know where to send it). Finally, if you want to use a port other than 80, you should be able to configure this in the Node.js configuration. This may also be configurable in the router so that requests coming in on port 80 are mapped to, say, 8080, but, given that this is a company, it's probably easier to reconfigure the Node.js server than to have the router set up special mappings.
This comes from personal experience with hosting web servers at home. Corporate routers should need similar configuration, unless each system has a public IP address on the internet, which is unlikely.
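A quick way to sanity-check the chain once the app is up (8080 is only an example port, and <server-ip> is a placeholder):

    # On the server: confirm the Node process is listening, and on which address/port
    sudo ss -tlnp | grep node

    # From another machine on the LAN (or from outside, once the router forwards the port)
    curl http://<server-ip>:8080/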
