Connecting from a Linux box to AWS-VPN using OKTA Push Authentication

First of all, I'm a rookie when it comes to VPN/security issues, so please
forgive whatever errors I make while describing my problem;
I hope I'm able to make it clear.
Our contractors replaced the AVIATRIX-OKTA VPN with AWS-VPN using OKTA
authentication. They sent us an .ovpn file that works fine on
Windows/Mac using the AWS VPN Client application, but a
couple of us using Linux boxes (Ubuntu specifically) run the
method described by AWS, which is: openvpn config-file.ovpn,
and it does not work.
It simply asks for user/password and then fails with an auth error (we use our OKTA credentials);
it seems nothing is configured to go to OKTA, open a browser, or whatever it needs to do.
As a side note, we can connect without any trouble to our k8s cluster using the OKTA
client libraries; not sure if this is useful or not, just in case.
The .ovpn file looks like this:
client
dev tun
proto tcp
remote random.cvpn-endpoint-xxxxxx.yyy.clientvpn.us-west-2.amazonaws.com 443
remote-random-hostname
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-GCM
verb 5
<ca>
....
....
....
</ca>
auth-user-pass
auth-federate
auth-retry interact
auth-nocache
reneg-sec 0
An interesting thing to notice is that openvpn complains about auth-federate,
which it does not seem to recognize, so I started using the GNOME network-manager, which seems
to accept this configuration, but I am getting an auth error there too.
After this I tried openvpn3, which didn't complain about the configuration,
but I still get the same error.
I also tried appending the TOTP token to the password, with the same result.
Any help on how to configure it, or even just knowing whether it is possible, would be greatly welcome;
there seems to be very little information about this on the net
and we are really stuck. We would prefer not to change OS or machines, as they
are asking us to, or to use a VM just to connect.
Thanks in advance,

We have tried the solution mentioned in the following URL and it worked for us:
https://github.com/samm-git/aws-vpn-client/blob/master/aws-connect.sh
The detailed working of this solution is explained in: https://github.com/samm-git/aws-vpn-client/blob/master/aws-connect.sh.
We made a few changes to the configuration files to make it work.
Removed the following lines from vpn.conf:
auth-user-pass
auth-federate
Made the following change at line 38 of the script aws-connect.sh:
open "$URL"
to
xdg-open "$URL"
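If you prefer to script those two tweaks rather than edit the files by hand, here is a minimal sketch, assuming vpn.conf and aws-connect.sh from the linked repository sit in the current directory:
# Drop the two directives that stock OpenVPN rejects from the client config.
sed -i '/^auth-user-pass$/d;/^auth-federate$/d' vpn.conf
# The script calls the macOS `open` command; on Linux, xdg-open launches the
# SAML login page in the default browser instead.
sed -i 's/open "\$URL"/xdg-open "$URL"/' aws-connect.sh
# Then connect via the helper script as described in the repository.
./aws-connect.sh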

Finally I got an answer from AWS people:
If the Client VPN endpoint is configured using SAML-based
authentication (such as Okta), then you have to use the AWS-provided
client to connect:
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html#saml-requirements
And they promised to update the client documentation with a WARNING about
this.

Related

How to create an HTTPS tunnel on my VPS for my Twitch bot event listener

I found an example of how to use the Twitch EventSub webhooks (https://github.com/twitchdev/eventsub-webhooks-node-sample/blob/main/index.js), but I'm struggling to figure out how to set it up without having to install ngrok or other apps on my PC, since I have a VPS where I host the bot. I understood the GET method, but POST is a bit difficult for me.
Hope I explained it well enough for someone to understand.
Twitch EventSub at the time of writing only offers a "webhook transport",
so you should be able to set this up no problem on your VPS, since your VPS is web accessible.
To test this locally on your PC, yes, you will need a proxy/tunnel such as ngrok to make your PC web accessible.
A "webhook transport" (to oversimplify) operates the same way a login form on a website does. You fill in the form and hit submit, and the form is POST'ed to the server.
With webhooks it's the same thing, except the data isn't POST'ed as a form but as a JSON blob in the body.
So you can use anything capable of receiving an HTTP POST. There are a few NodeJS examples, like the one you linked, kicking about.
TL;DR: unless you are testing, skip setting it up on your PC and start by setting it up on your VPS, as the VPS doesn't need a tunnel; apache/nginx is the SSL terminator that passes requests to your Node script, if you use a Node script like the linked example in the OP.
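As a rough sanity check once the endpoint is up on the VPS, you can hand-feed it a POST before subscribing anything. This is only a sketch: the /eventsub path is a guess at whatever route your script listens on, the host is a placeholder, and the body just mimics the shape of Twitch's webhook_callback_verification message:
# Simulate a verification-style POST with a JSON body against your own endpoint.
curl -X POST https://your-vps.example.com/eventsub \
  -H 'Content-Type: application/json' \
  -H 'Twitch-Eventsub-Message-Type: webhook_callback_verification' \
  -d '{"challenge":"test-challenge","subscription":{"type":"channel.follow","version":"1"}}'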

Best way to use Slack API script from server

I created some internal tools that use the Slack API, and after testing them locally using ngrok and serveo, I'd like to use a server to allow other colleagues to use the script.
Do I still need to use ngrok, serveo, or any other tunnel to forward the traffic when using Flask? If not, how would that work? I tried using the IP or server name, but that didn't work.
In case a tunnel is needed, what would be the best free tool to use: ngrok, serveo, or others?
The problem with ngrok is that the free version expires after 8 hours, meaning every time the session expires I need to update the Request URL, Options Load URL, and Slash Commands URLs in the Slack API UI, as the URL changes.
Using serveo gives the option of keeping the same URL (serveo.net), although this didn't seem to be very stable.
I tried to refresh it by adding the command to a shell script:
ssh -R 80:localhost:5000 serveo.net
But got this message:
Pseudo-terminal will not be allocated because stdin is not a terminal. Host key verification failed.
Checking online, I tried the suggested solutions (mainly adding -t -t), but that didn't work either; I got:
Could not resolve hostname 80:localhost:5000: Name or service not known
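For what it's worth, that second error usually means the -t -t flags landed between -R and its forward spec, so ssh consumed -t as the forward argument and treated 80:localhost:5000 as the hostname. A sketch of a script-friendly invocation (StrictHostKeyChecking=accept-new needs OpenSSH 7.6+; alternatively, run the command once interactively first so serveo's host key is saved):
# -T skips pseudo-terminal allocation; keep the forward spec right after -R.
ssh -T -o StrictHostKeyChecking=accept-new -R 80:localhost:5000 serveo.net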

How to get and use the cf ssh-code password

We are using the CF Diego API, version 2.89. Currently I am able to use it and see the vcap user and the app resources when running cf ssh myApp.
Now it's become harder :-)
I want to deploy App1 that will "talk" to App2 and have access to its file system (as is available on the command line when you run ls...) via code (node.js). Is that possible?
I've found this lib, which provides the ability to connect over ssh from code, but I'm not sure what I should put for host, port, etc.
In the connect call I provided the password, which should be retrieved via code.
EDIT
// Using the ssh2 client library
const { Client } = require('ssh2');

const conn = new Client();
conn.on('ready', () => {
  // connection established; run commands or open an SFTP session here
  conn.end();
}).connect({
  host: 'ssh.cf.mydomain.com',
  port: 2222,
  username: 'cf:181c32e2-7096-45b6-9ae6-1df4dbd74782/0',
  password: 'qG0Ztpu1Dh'
});
Now when I use cf ssh-code (to get the password) I see a lot of requests, which I tried to simulate via Postman without success.
Could someone assist? I need to get the password value somehow...
If I don't provide it I get the following error:
SSH Error: All configured authentication methods failed
Btw, let's say that I cannot use the CF networking functionality or volume services, and I know that the container is ephemeral....
The process of what happens behind the scenes when you run cf ssh is documented here.
It obtains an ssh token; this is the same as running cf ssh-code, which just gets an auth code from UAA. If you run CF_TRACE=true cf ssh-code, you can see exactly what it's doing behind the scenes to get that code.
You would then need an SSH client (probably a programmatic one) to connect using the following details:
port -> 2222
user -> cf:<app-guid>/<app-instance-number> (ex: cf:54cccad6-9bba-45c6-bb52-83f56d765ff4/0)
host -> ssh.system_domain (look at cf curl /v2/info if you're not sure)
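To illustrate those pieces end to end from a terminal (a sketch only; my-app and ssh.system.domain.com are placeholders, and a programmatic client would use the same values):
# Print a one-time authorization code from UAA; it is consumed by a single login.
cf ssh-code
# Connect with a plain ssh client, pasting that code at the password prompt.
# 0 is the app instance index.
ssh -p 2222 cf:$(cf app my-app --guid)/0@ssh.system.domain.com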
Having said this, don't go this route. It's a bad idea. The file system for each app instance is ephemeral. Even if you're connecting from other app instances to share the local file system, you can still lose the contents of that file system pretty easily (cf restart) and for reasons possibly outside of your control (unexpected app crash, platform admin does a rolling upgrade, etc).
Instead store your files externally, perhaps on S3 or a similar service, or look at using Volume services.
I have exclusively worked with PCF, so please take my advice with a grain of salt given your Bluemix platform.
If you have a need to look at files created by App2 from App1, what you need is a common resource.
You can inject an S3 resource as a CUPS service, create a service instance, and bind it to both apps. That way both will read / write to the same S3 endpoint.
A quick Google search for a Bluemix S3 resource shows: https://console.bluemix.net/catalog/infrastructure/cloud_object_storage
Ver 1.11 of Pivotal Cloud Foundry comes with Volume Services.
Seems like Bluemix has a similar resource - https://console.bluemix.net/docs/containers/container_volumes_ov.html#container_volumes_ov
You may want to give that a try.

Why am I getting SSL_read errors and Rpc_client_frag_read errors when trying to Remote Desktop

I'm trying to set up a remote desktop session for monitoring specific systems at my place of work. I only have access to a Linux machine, and I need to connect via a terminal server gateway. I am using FreeRDP to do this, and I am using the following command to create the connection:
xfreerdp /d:** /u:***** /p:******* /g:******.************.***
/v:****.*********.***** /port:3389 /size:1920x1080
I have hidden all connection details per my supervisor's request; however, both he and I verified that the correct information is entered in the fields.
When I send the connection through I get the following error:
Connected to ******.************.***:443
Connected to ******.************.***:443
TS Gateway Connection Success
Got stub length 4 with flags 3 and called 7
Got stub length 4 with flags 3 and called 6
SSL_read: I/O error: connection reset by peer (104)
Rpc_client_frag_read: error reading header
Would anyone have any idea of what I might be missing? I have even tried adding
/sec:rdp
to the command, and even that produced the same error.
Try RDP from a Windows system (or have someone else try from their system, since you don't have direct access to Windows). I know it won't solve your problem, but it may give you better information. I'm in a similar situation and got the same error message. I tried Remmina instead of xfreerdp and got even less information than xfreerdp spits out.
From a Windows VM, at least I could tell when I got my domain\username & password right -- it told me my account was not allowed RDP access to that server. I figure that means there are accounts that can RDP in, but mine is not among them. Along the way, though, I found that the remote was using a certificate from an untrusted authority, which was useful information for my case.
If your Linux is old or hasn't been updated, do so. Your certificate store may be out of date. But it may also be that your company's Windows domain has certificates that Linux doesn't know about. It could be a simple matter that you're lacking the company-supplied cert (because they push it to all Windows machines on the domain, but your Linux machine doesn't get that "benefit").
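If the untrusted-certificate angle applies in your case too, one way to rule it out is to tell FreeRDP to skip certificate verification for a single test connection. This is only a sketch with placeholder host names; the flag is /cert:ignore on FreeRDP 2.x and /cert-ignore on older builds, and it should not be left on permanently:
# Same style of connection as above, with certificate verification disabled for a test.
xfreerdp /d:CORP /u:someuser /g:gateway.example.com /v:target.example.com \
  /port:3389 /size:1920x1080 /cert:ignore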

Connecting Azure SAP- or SharePoint-Connector to OnPremise fails

I am trying to connect Azure to our on-premise SAP installation. Our goal: calling an RFC via the SAP Connector within a Logic App.
What we did so far:
Created a Relay-ServiceBus.
Created a default SAP-Connector available in Azure Marketplace and inserted all required information including the ServiceBus-ConnectionString.
For testing purposes: created a new Windows Server VM on-prem:
Enabled IIS
Disabled Windows-Firewall
Installed SAP-Libraries required by the HybridConnector.
Then we downloaded and installed the HybridListener on the Windows Server and entered the required ConnectionString.
Basically it was pretty much straight-forward according to this article:
http://azure.microsoft.com/de-de/documentation/articles/app-service-logic-integrate-with-an-on-premise-sap-server/
(Except maybe for installing the SAP libraries, which is somewhat weakly documented.)
After all that installation work we went back into our Azure Portal. Surprisingly, the SAP Connector still told us: "On-Premise Setup Incomplete"
Our biggest problem: there is no other information available. Why is the setup incomplete? Did we enter some wrong configuration, or is there a network issue?
After some time we found out that we also need to open the following outgoing ports:
9350 to 9354
443
Unfortunately this was documented at a different place: https://msdn.microsoft.com/en-us/library/azure/ee706729.aspx
But the connection is still not working, same error as above: "On-Premise Setup Incomplete". And yes, we did restart IIS as well as the whole system.
My question now: is there any possibility to find the reason for this situation? A couple of weeks ago we had the same issue with a SharePoint Connector, which is still not running.
Is there any kind of HybridConnector log file on the server, or something similar that would help us figure out the real problem? Or has someone had the same problem in the past and has some advice?
Thanks in advance!
EDIT: Hybrid Connection is now online!
I just had to change the write permissions for the HybridListenerAppPool:
C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Aspnet_regiis.exe -ga "IIS AppPool\HybridListenerAppPool"
Solution found: http://forums.asp.net/t/1566987.aspx and IIS7 folder permissions for web application.
But it is still not possible to use the SAP Connector within a Logic App:
After analyzing the log of the App Service gateway, I found a hint telling me to look at the Swagger file of the SAP Connector.
I really do not understand why the Hybrid Connection is fine but there is still no listener connected.
After some firewall forensics, we actually figured out that there is some outgoing traffic on ports 5671 and 5672. If someone else faces the same problem, you need to open all the following outgoing TCP ports:
443
5671 - 5672
9350 - 9354
Unfortunately it looks like this is not documented at all.
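A quick way to verify those outbound ports are actually reachable, as a sketch run from a Linux box on the same network segment as the listener (replace the namespace with your own relay's Service Bus address):
# Probe each outbound TCP port the relay needs; -z does a connect-only test,
# -w 3 gives up after three seconds.
for port in 443 5671 5672 9350 9351 9352 9353 9354; do
  nc -vz -w 3 mynamespace.servicebus.windows.net "$port"
done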
