How to get the cf ssh-code password - node.js

We are using the CF Diego API, version 2.89. So far I was able to use it and see the vcap and the app resources when running cf ssh myApp.
Now it's become harder :-)
I want to deploy App1 that will "talk" with "APP2" and have access to its file system (as it is available on the command line when you run ls...) via code (node.js). Is that possible?
I've found this lib, which provides the ability to connect over SSH via code, but I'm not sure what I should put for host, port, etc.
In the connect I provided the password, which should be retrieved
via code.
EDIT
const { Client } = require('ssh2'); // assuming the ssh2 package (npm install ssh2)
const conn = new Client();
conn.on('ready', () => {
  conn.exec('ls -la', (err, stream) => {   // run a command in the app container
    if (err) throw err;
    stream.on('data', (data) => console.log(data.toString()));
    stream.on('close', () => conn.end());
  });
}).connect({
  host: 'ssh.cf.mydomain.com',
  port: 2222,
  username: 'cf:181c32e2-7096-45b6-9ae6-1df4dbd74782/0',
  password: 'qG0Ztpu1Dh'
});
Now when I use cf ssh-code (to get the password) I see a lot of requests, which I have tried to reproduce in Postman without success.
Could someone assist? I need to get the password value somehow...
If I don't provide it I get the following error:
SSH Error: All configured authentication methods failed
Btw, let's say that I cannot use the CF networking functionality or volume services, and I know that the container is ephemeral...

What happens behind the scenes when you run cf ssh is documented here.
It obtains an SSH token; this is the same as running cf ssh-code, which just gets an auth code from UAA. If you run CF_TRACE=true cf ssh-code you can see exactly what it does behind the scenes to get that code.
You would then need an SSH client (probably a programmatic one) to connect using the following details:
port -> 2222
user -> cf:<app-guid>/<app-instance-number> (ex: cf:54cccad6-9bba-45c6-bb52-83f56d765ff4/0)
host -> ssh.system_domain (look at cf curl /v2/info if you're not sure)
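To get the password value itself, here is a rough node.js sketch of what cf ssh-code does, based on that documented flow. It assumes you already have a user OAuth token (the output of cf oauth-token) and that the UAA host and the SSH OAuth client id (app_ssh_oauth_client in cf curl /v2/info, usually ssh-proxy) are known; uaa.cf.mydomain.com is a placeholder:
// Sketch: mirror `cf ssh-code` - ask UAA for a one-time authorization code for the ssh-proxy client.
const https = require('https');

function getSshCode(uaaHost, bearerToken) {
  return new Promise((resolve, reject) => {
    const req = https.request({
      host: uaaHost,                                                    // e.g. 'uaa.cf.mydomain.com' (placeholder)
      path: '/oauth/authorize?response_type=code&client_id=ssh-proxy',  // client id from /v2/info
      headers: { Authorization: bearerToken }                           // e.g. 'bearer eyJhbGciOi...'
    }, (res) => {
      // UAA replies with a redirect; the one-time code is in the Location header
      const match = (res.headers.location || '').match(/code=([^&]+)/);
      if (match) resolve(match[1]);
      else reject(new Error('No code in redirect, status ' + res.statusCode));
    });
    req.on('error', reject);
    req.end();
  });
}

// getSshCode('uaa.cf.mydomain.com', 'bearer eyJ...').then((code) => { /* use as SSH password */ });
The value returned is what you pass as the password to your SSH client; it is a one-time code, so fetch a fresh one for each connection.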
Having said this, don't go this route. It's a bad idea. The file system for each app instance is ephemeral. Even if you're connecting from other app instances to share the local file system, you can still lose the contents of that file system pretty easily (cf restart) and for reasons possibly outside of your control (unexpected app crash, platform admin does a rolling upgrade, etc).
Instead store your files externally, perhaps on S3 or a similar service, or look at using Volume services.

I have exclusively worked with PCF, so please take my advice with a grain of salt given your Bluemix platform.
If you need App1 to look at files created by App2, what you need is a common resource.
You can expose an S3 bucket as a user-provided service (CUPS), create the service instance, and bind it to both apps. That way both will read/write against the same S3 endpoint.
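For illustration, here is a minimal sketch of how both apps could pick up those shared credentials from VCAP_SERVICES; the service name my-s3 and the credential fields are placeholders for whatever you define in the CUPS:
// Runs identically in App1 and App2: both read the same bound user-provided service.
const vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');
const s3 = (vcap['user-provided'] || []).find((svc) => svc.name === 'my-s3');

if (s3) {
  // e.g. { endpoint, bucket, accessKeyId, secretAccessKey } - whatever you put in the CUPS
  console.log('Shared object storage endpoint:', s3.credentials.endpoint);
}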
Quick Google search for Bluemix S3 Resource shows - https://console.bluemix.net/catalog/infrastructure/cloud_object_storage
Version 1.11 of Pivotal Cloud Foundry comes with Volume Services.
Seems like Bluemix has a similar resource - https://console.bluemix.net/docs/containers/container_volumes_ov.html#container_volumes_ov
You may want to give that a try.

Related

Connecting in a Linux box to AWS-VPN using OKTA Push Authentication

First of all, I'm a rookie when it comes to VPN/security issues, so please
forgive any errors I make while describing my problem;
I hope I'm able to make it clear.
Our contractors changed the Aviatrix-OKTA VPN for AWS-VPN with OKTA
authentication. They sent us an .ovpn file that works fine for
Windows/macOS using the AWS VPN Client application software, but a
couple of us using Linux boxes (Ubuntu, specifically) run the
method described by AWS, which is: openvpn config-file.ovpn,
and it does not work.
It simply asks for a username/password and then fails with an auth error (we use our OKTA credentials);
it seems nothing is configured to go to OKTA, open a browser, or whatever it needs to do.
As a side note, we can connect without any trouble to our k8s cluster using OKTA
client libraries; not sure if this is useful, just in case.
The .ovpn file looks like this
client
dev tun
proto tcp
remote random.cvpn-endpoint-xxxxxx.yyy.clientvpn.us-west-2.amazonaws.com 443
remote-random-hostname
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-GCM
verb 5
<ca>
....
....
....
</ca>
auth-user-pass
auth-federate
auth-retry interact
auth-nocache
reneg-sec 0
An interesting thing to notice is that openvpn complains about auth-federate;
it doesn't seem to recognize it, so I started using the GNOME network-manager, which seems
to accept this configuration, but I'm getting an auth error there too.
After that I tried openvpn3, which didn't complain about the configuration,
but I'm still getting the same error.
I also tried appending the TOTP token to the password, with the same result.
Any help on how to configure it, or even just knowing whether it is possible, will be greatly welcome;
there seems to be very little information about this on the net
and we are really stuck. We would rather not change OS or machines, as they
are asking us to, or use a VM just to connect.
Thanks in advance,
We have tried the solution at the following URL, and it worked for us; the detailed working of the solution is also explained there:
https://github.com/samm-git/aws-vpn-client/blob/master/aws-connect.sh
We made a few changes to the configuration files to make it work.
Removed the following lines from vpn.conf:
auth-user-pass
auth-federate
Changed line 38 of the script aws-connect.sh from
open "$URL"
to
xdg-open "$URL"
Finally I got an answer from AWS people:
If the Client VPN endpoint is configured using SAML-based
authentication (such as Okta), then you have to use the AWS-provided
client to connect:
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html#saml-requirements
And they promised to update the client documentation with a WARNING about
this.

linux redirect localhost port to url port

I need to redirect localhost:8080 to http://url:8080/.
Some background:
I am using docker swarm stack services. One service (MAPS) runs a simple HTTP server that serves a list of xml files on port 8080, and another service (WAS) runs WebSphere Application Server with a connector that uses these files; to be more precise, it reads a file maps.xml that contains the URLs of the other files in the form http://localhost:8080/<file-name>.xml.
I know docker allows me to call on the service name and port within the services, thus I can use curl http://MAPS:8080/ from inside my WAS service and it outputs my list of xml files.
However, this will not always be true. The prod team may change the port number they want to publish or they might update the maps.xml file and forget to change localhost:8080 to MAPS:8080.
Is there a way to make it so any call to localhost:8080 gets redirected to another URL, preferably using a configuration file? I also need it to be lightweight, since the WAS service is already quite heavy and I can't make it too large to deploy.
Solutions I tried:
iptables: Installed it on the WAS service container but when I tried using it it said my kernel was outdated
tinyproxy: Tried setting it up as a reverse proxy but I couldn't make it work
ncat with inetd: Tried to use this solution but it also didn't work
I am NO expert so please excuse any noob mistakes I made. And thanks in advance!
It is generally not a good idea to redirect localhost to another location as it might disrupt your local environment in surprising ways. Many packages depend on localhost being localhost :-)
It is possible to add MAPS to your hosts file (/etc/hosts), giving it the IP address of the MAPS service.

How to create web based terminal using xterm.js to ssh into a system on local network

I came across this awesome library xterm.js, which is also the base for Visual Studio Code's terminal. I have a very general question.
I want to access a machine (SSH into a machine) on a local network through a web-based terminal (which is outside the network, maybe on an AWS server). I was able to do this inside a local network successfully, but I could not reach a conclusion on how to do it from the Internet --> local network.
As an example: an AWS server running the application on IP 54.123.11.98 has a GUI with a button to open a terminal. I want to open the terminal of a local machine which is on a local network somewhere behind some public IP, on local IP 192.168.1.7.
Can the above example be achieved using some sort of solution where I can use xterm.js, so that I don't have to build a web-based terminal myself? What are the major security concerns I should keep in mind while exposing terminals this way?
I was thinking along the lines of using a fixed intermediate server between AWS and the local network IP and some sort of reverse SSH tunnel, but I am not sure if this is the right way or whether there is a simpler/better way to achieve this.
I know DigitalOcean, Google Cloud, etc. all do this, but they connect to a computer which has a public IP, while I have a machine on a local network. I don't really want to configure the router to do any kind of setup.
After a bit of research here is working code.
Libraries:
1) https://socket.io/
This library is used to transmit data between the client and the server.
2) https://github.com/staltz/xstream
This library is used for terminal view.
3) https://github.com/mscdex/ssh2
This is the main library which is used for establishing a connection with your remote server.
Step 1: Install Library 3 into your project folder
Step 2: On the Node side, create a server.js file that opens a socket.
Step 3:
Connect the client socket to the Node server (both are on the local machine).
The tricky part is how to use socket.io and ssh2 together.
When the socket emits data, you trigger an SSH command using the ssh2 library; when the ssh2 library responds (from the server), you transmit that data over the socket back to the client. That's it.
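A minimal sketch of that wiring (not the linked example's exact code), assuming socket.io v4 and ssh2 are installed; the host and credentials are placeholders:
// server.js - bridge a socket.io connection to an SSH session via ssh2
const http = require('http');
const { Server } = require('socket.io');
const { Client } = require('ssh2');

const httpServer = http.createServer();
const io = new Server(httpServer);

io.on('connection', (socket) => {
  const ssh = new Client();
  ssh.on('ready', () => {
    // open an interactive shell once the SSH session is up
    ssh.shell((err, stream) => {
      if (err) return socket.emit('data', 'SSH error: ' + err.message);
      socket.on('data', (input) => stream.write(input));                      // keystrokes -> SSH
      stream.on('data', (output) => socket.emit('data', output.toString()));  // SSH output -> browser
      stream.on('close', () => ssh.end());
    });
  }).connect({
    host: '192.168.1.7',   // placeholder: the machine you want to reach
    port: 22,
    username: 'user',      // placeholder credentials
    password: 'password'
  });
  socket.on('disconnect', () => ssh.end());
});

httpServer.listen(3000);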
Click here to find an example.
That example will have these files & folders:
Type Name
------------
FILE server.js
FILE package.json
FOLDER src
FOLDER xtream
First you need to configure your server IP, user and password or cert file on server.js and just run node server.js.
P.S.: Don't forget to run npm install
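On the browser side, the glue between xterm.js and the socket is only a few lines. A sketch, assuming a recent xterm.js and socket.io-client, with a placeholder container element:
// client side: render SSH output in xterm.js and send keystrokes back
const socket = io('http://localhost:3000');          // address of server.js
const term = new Terminal();                         // from the xterm.js package
term.open(document.getElementById('terminal'));      // placeholder container <div>

term.onData((input) => socket.emit('data', input));  // keystrokes -> server
socket.on('data', (output) => term.write(output));   // server output -> terminal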
Let me know if you have any questions!
After some more research I came across this service: https://tmate.io/, which does the job perfectly. Though if you need tmate's web-based terminal, you have to use their SSH servers as a reverse proxy, which I was ideally not comfortable with. However, they provide tmate-server, which can be used to host your own reverse proxy server, but it lacks a web UI. To build a system where you access a client behind NAT over SSH on the web, the steps are below.
Install and configure tmate-server on some cloud machine.
Install tmate on the client side and configure to connect to a cloud machine.
Create a nodejs application using xterm.js(easy because of WebSocket based communication) which connects to your tmate-server and pass commands to the respective client. (Beware of security issues of exposing this application, since you will be passing Linux commands ).
Depending on your use case you might need a small wrapper around tmate client on client-side to start/stop it automatically or via some UI/manual action.
Note: I wrote a small wrapper on the client side as well to start/stop it and pass the required information to an API server (written in nodejs), which then passes the information on to another API that connects the browser to the respective client session. Since we wrote this application ourselves, it included authentication as well as restrictions on which commands can be run in the terminal. You can customize it a lot.

NodeJS - securely connect to external redis server

On my main server, I fetch data from an external/separate redis server, which is accessed through an API (https://localhost:7000/api/?token=****), and that works. However, the token and API are not secure, and since I want to keep the redis server separate, this technique isn't suited to my case.
In my case I want to have 2 independent servers A and B.
A should load data from B without using an api or url call... Instead it should use port (e.g. //server:123). This way server B can only be accessed from A.
I want this approach to work for both development and production. AWS has "Server Groups" I believe, but that's production only...
So is there a way to create this kind of connection with nodejs? I also want to know whether this is only possible with an already running server, since I don't have one yet.
Note: In case you are wondering, I use redis to store private keys for encryption, so I need a secure, separate server which can be controlled independently
It is not very clear what you're trying to do since accessing data from another server without using an API does not really make sense. Anything you do to access it is some type of API.
If you want to make it so that only server A can access server B, then you have a number of choices to make that secure:
Require authentication whenever server B is accessed and make it so that only server A has those authentication credentials.
Assuming server A and server B are in your same server infrastructure, put the server B API on a port that is not available to the outside world, but is only available from within your server infrastructure (this usually involves picking a port that your firewall to the outside is blocking access to).
On server A, only accept connections on its API from the specific IP address of server B.
You can even implement more than one of these options at once. For example, it's not uncommon to use 1) and 2) together.
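As an illustration of option 1, here is a minimal sketch using the node redis client with password authentication (requirepass set on server B); the host, port, password, and key name are placeholders:
// assumes the 'redis' npm package (v4) and `requirepass` configured on server B
const { createClient } = require('redis');

const client = createClient({
  url: 'redis://:my-strong-password@server-b.internal:6379'   // placeholder credentials/host
});
client.on('error', (err) => console.error('Redis error', err));

(async () => {
  await client.connect();
  const key = await client.get('some-private-key');            // placeholder key name
  console.log(key);
  await client.quit();
})();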
Stunnel is built for that! Roughly speaking, it's like a VPN, but for ports rather than machines. It's a bit complicated; you will have to deal with certificates and a couple of other things (configuring both servers...), but once it's done it's a breeze to launch and reuse (just run one file). Give it a try!
Also see this link: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-ssl-tunnel-using-stunnel-on-ubuntu
You should also consider adding an iptables rule on the database server to allow access from your server only.
Edit:
Keep in mind that redis was designed to be used in a trusted environment. This means that the security layer will not be redis itself but third-party software that you'll need to set up.
For dev purposes there is no need to make this bulletproof. And even if you wanted to, it's kind of hard to do, because the security of your app mainly depends on the infrastructure of the company that will host it.
That being said, if you want to secure a redis instance in a localhost environment, an iptables rule allowing only localhost to access port 6379 will be sufficient.
The other thing that could compromise the security of your redis DB is the app itself. An important aspect of this is to validate EVERYTHING; that should be a good start.
Finally, if you want to dive a bit deeper, take a look at this link:
https://www.digitalocean.com/community/tutorials/how-to-secure-your-redis-installation-on-ubuntu-14-04
Hope this helps!

502 Bad Gateway on Google Compute Engine, Nginx, and Meteor

I built a Meteor app and would like to run it on Google Compute Engine. I followed the guide found here to get a basic instance of my app up and running, changing the disk size/type, the instance type, and the disk and instance zones (to match where I live), and adding export METEOR_SETTINGS={ ... } as the second-to-last line of the startup.sh file.
Everything seemed to work fine: I have the persistent disk and VM instance listed on my Google Cloud dashboard. I created a new firewall rule on my default network to allow incoming tcp:80 and tcp:443 traffic, and now when I access the instance's external IP address from my browser, I'm shown a 502 Bad Gateway nginx/1.8.0 page (where I'd instead expect the homepage of my Meteor app).
Is there anything in the configuration details in the startup.sh file that I'm missing or should modify? Could there be an issue with how the compute vm instance is communicating with the persistent disk? Frankly I'm far outside of my domain with this type of thing.
After ssh-ing into my instance and dinking around a bit, I called export ROOT_URL='<the_instances_external_ip>', rather than 'http://localhost', at which point everything started to work. It's unfortunate how little documentation there is on getting Meteor apps configured and running in production (I only thought to mess with ROOT_URL after searching something unrelated), so hopefully this will at least be helpful for someone else.
