Best way to use Slack API script from server - python-3.x

I created some internal tools that use the Slack API, and after testing them locally using ngrok and serveo, I'd like to run the script on a server so other colleagues can use it.
Do I still need ngrok, serveo, or another tunnel to forward the traffic to Flask? If not, how would that work? I tried using the server's IP and hostname, but neither worked.
If a tunnel is needed, what would be the best free tool: ngrok, serveo, or something else?
The problem with ngrok is that the free version expires after 8 hours, meaning every time the session expires I need to update the Request URL, Options Load URL, and Slash Command URLs in the Slack API UI, as the URL changes.
Serveo gives the option of keeping the same URL (serveo.net), but it didn't seem to be very stable.
I tried to refresh it by adding the command to a shell script:
ssh -R 80:localhost:5000 serveo.net
But got this message:
Pseudo-terminal will not be allocated because stdin is not a terminal. Host key verification failed.
Searching online, I tried the suggested solutions (mainly adding -t -t), but that didn't work either; I got:
Could not resolve hostname 80:localhost:5000: Name or service not known
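On a server that colleagues can reach directly, no tunnel is needed at all: Slack just needs a public HTTPS URL that resolves to that machine. As a minimal sketch (not the asker's actual script), here is a slash-command receiver using only the Python standard library; Flask works the same way. The port and the reply text are assumptions, and in practice Slack requires a valid HTTPS endpoint, so a TLS-terminating proxy such as nginx would sit in front.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

class SlashCommandHandler(BaseHTTPRequestHandler):
    """Receives Slack slash-command POSTs (form-encoded) and replies with JSON."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode())
        text = fields.get("text", [""])[0]  # whatever the user typed after the command
        reply = json.dumps({"response_type": "ephemeral",
                            "text": "You sent: " + text}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    # Bind to all interfaces so Slack can reach the host directly;
    # the port number here is an arbitrary choice.
    HTTPServer(("0.0.0.0", 3000), SlashCommandHandler).serve_forever()
```

With this in place, the Request URL in the Slack app configuration points at the server's own domain and never changes, which removes the 8-hour ngrok problem entirely.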

Related

How to create an HTTPS tunnel on my VPS for my Twitch bot event listener

I found an example of how to use the Twitch EventSub webhooks (https://github.com/twitchdev/eventsub-webhooks-node-sample/blob/main/index.js), but I'm struggling to figure out how to set it up without installing ngrok or other apps on my PC, since I have a VPS where I host the bot. I understood the GET method, but POST is a bit difficult for me.
Hope I explained it well enough for someone to understand.
Twitch EventSub, at the time of writing, only offers a "webhook transport".
So you should be able to set this up on your VPS with no problem, since your VPS is web accessible.
To test this locally on your PC, yes, you will need a proxy/tunnel such as ngrok to make your PC web accessible.
A "webhook transport" (to oversimplify) operates the same way a login form on a website does: you fill in the form and hit submit, and the form is POSTed to the server.
Webhooks are the same thing, except the data isn't POSTed as a form but as a JSON blob in the body.
So you can use anything capable of receiving an HTTP POST. There are a few NodeJS examples kicking about, like the one you linked.
TL;DR: unless you are testing, skip setting it up on your PC and start with setting it up on your VPS. The VPS doesn't need a tunnel; Apache/nginx is the SSL terminator that passes requests to your Node script, if you use one like the example linked in the OP.
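To make "anything capable of receiving an HTTP POST" concrete, here is a bare-bones sketch using only the Python standard library (a Node handler like the linked sample follows the same shape). The port is an assumption, and a real deployment should also verify Twitch's HMAC signature header, which this sketch skips:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventSubHandler(BaseHTTPRequestHandler):
    """Minimal EventSub webhook receiver (no signature verification)."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        msg_type = self.headers.get("Twitch-Eventsub-Message-Type", "")
        if msg_type == "webhook_callback_verification":
            # Twitch confirms ownership of the callback URL by expecting
            # the raw challenge string echoed back in the response body.
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body.get("challenge", "").encode())
        else:
            # Ordinary event notification: acknowledge quickly with a 2xx.
            self.send_response(204)
            self.end_headers()

if __name__ == "__main__":
    # On the VPS, apache/nginx terminates SSL and proxies to this port.
    HTTPServer(("127.0.0.1", 8080), EventSubHandler).serve_forever()
```

The listener stays on localhost behind the reverse proxy, which is exactly the SSL-terminator arrangement described above.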

Connecting from a Linux box to AWS-VPN using OKTA Push Authentication

First of all, I'm a rookie when it comes to VPN/security issues, so please forgive whatever errors I make while describing my problem; I hope I'm able to make it clear.
Our contractors replaced the AVIATRIX-OKTA VPN with AWS-VPN using OKTA authentication. They sent us an .ovpn file that works fine on Windows/Mac using the AWS VPN Client application, but for a couple of us on Linux boxes (Ubuntu, specifically), the method described by AWS, openvpn config-file.ovpn, does not work.
It simply asks for user/password and then fails with an auth error (we use our OKTA credentials); it seems nothing is configured to go to OKTA, open a browser, or whatever else it needs to do.
As a side note, we can connect to our k8s cluster using OKTA client libraries without any trouble. Not sure if this is useful, but just in case.
The .ovpn file looks like this:
client
dev tun
proto tcp
remote random.cvpn-endpoint-xxxxxx.yyy.clientvpn.us-west-2.amazonaws.com 443
remote-random-hostname
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-GCM
verb 5
<ca>
....
....
....
</ca>
auth-user-pass
auth-federate
auth-retry interact
auth-nocache
reneg-sec 0
An interesting thing to notice is that openvpn complains about auth-federate; it doesn't seem to recognize the option. So I tried the GNOME network manager, which seems to accept this configuration, but I get an auth error too.
After this I tried openvpn3, which didn't complain about the configuration, but I still got the same error.
I also tried appending a TOTP token to the password, with the same problem.
Any help on how to configure it, or even just knowing whether it is possible, would be greatly welcome; there seems to be very little information about this on the net and we are really stuck. We would rather not change OS or machines, as they are asking us to, or use a VM just to connect.
Thanks in advance.
We tried the solution from https://github.com/samm-git/aws-vpn-client/blob/master/aws-connect.sh and it worked for us.
We made a few changes to the configuration files to make it work.
Removed the following lines from vpn.conf:
auth-user-pass
auth-federate
Changed line 38 of the aws-connect.sh script from:
open "$URL"
to
xdg-open "$URL"
Finally I got an answer from the AWS people:
If the Client VPN endpoint is configured using SAML-based
authentication (such as Okta), then you have to use the AWS-provided
client to connect:
https://docs.aws.amazon.com/vpn/latest/clientvpn-admin/client-authentication.html#saml-requirements
And a promise to update the client documentation with a WARNING about this.

How to make a request to a URL from a remote Linux server?

I need to see the response from a GET request to a REST URL on a remote Linux server. The only access I have to the remote Linux server is through PuTTY. Is there a way to do this?
I think my question may not be very clear. Let me rephrase.
I need to access a REST URL exposed on an IP different from that of the Linux server, from the Linux server I am connected to through PuTTY.
My application runs on this Linux server. I am unable to make the REST call from my application (I get an EOF exception at the webresource.get(ClientResponse.class) method). Since I don't have access to a browser on the Linux server, is there a way to see whether I can get a (JSON) response by making a GET request to that REST URL from the PuTTY session?
curl is a tool with which you can send HTTP requests:
curl -X GET $URL
My answer helps if you have SSH access to your Linux server.
Linux offers several packages.
The simplest and most common one is wget.
For example, if your URL is www.google.com, you can execute this command:
wget -qO- www.google.com
Another powerful tool is curl; however, you may need to install it on your Linux box first. https://www.cyberciti.biz/faq/how-to-install-curl-command-on-a-ubuntu-linux/
It gives you a lot of options, such as setting request headers, which might be needed for REST API authentication.
https://curl.haxx.se/docs/httpscripting.html
A simple example is
curl www.google.com
Hope that helps.
Source: Wget output document and headers to STDOUT
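If neither wget nor curl is installed and you can't add packages, a short Python script run over the SSH session also works, assuming a Python 3 interpreter is available on the server; the URL below is a placeholder for the actual REST endpoint:

```python
import urllib.request

def fetch(url, timeout=10):
    """Send a plain GET request and return (status code, body as text)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.status, resp.read().decode()

if __name__ == "__main__":
    # Replace with the REST URL you need to test from the server.
    status, body = fetch("http://www.example.com/")
    print(status)
    print(body)
```

Seeing the status code and raw (JSON) body this way is enough to tell whether the endpoint is reachable from the server at all, independently of the application's own HTTP client.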

How to get and use the cf ssh-code password

We are using the CF Diego API, version 2.89. Currently I am able to use it and see the vcap and the app resources when running cf ssh myApp.
Now it's become harder :-)
I want to deploy App1 that will "talk" to App2 and have access to its file system (as is available on the command line when you run ls ...) via code (node.js). Is that possible?
I've found this lib, which provides the ability to connect over SSH via code, but I'm not sure what I should put as host, port, etc.
In the connect call I provided the password, which should be retrieved via code.
EDIT
});
}).connect({
  host: 'ssh.cf.mydomain.com',
  port: 2222,
  username: 'cf:181c32e2-7096-45b6-9ae6-1df4dbd74782/0',
  password: 'qG0Ztpu1Dh'
});
Now when I use cf ssh-code (to get the password) I see a lot of requests, which I tried to simulate via Postman without success.
Could someone assist? I need to get the password value somehow...
If I don't provide it, I get the following error:
SSH Error: All configured authentication methods failed
Btw, let's say that I cannot use the CF networking functionality or volume services, and I know that the container is ephemeral...
The process of what happens behind the scenes when you run cf ssh is documented here.
It obtains an SSH token; this is the same as running cf ssh-code, which just gets an auth code from UAA. If you run CF_TRACE=true cf ssh-code, you can see exactly what it's doing behind the scenes to get that code.
You would then need an SSH client (probably a programmatic one) to connect using the following details:
port -> 2222
user -> cf:<app-guid>/<app-instance-number> (ex: cf:54cccad6-9bba-45c6-bb52-83f56d765ff4/0)
host -> ssh.system_domain (look at cf curl /v2/info if you're not sure)
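The three details above can be captured in a small helper; the GUID, instance number, and domain below are placeholders, and the password is deliberately absent because it is the short-lived one-time code from cf ssh-code, fetched fresh for each login:

```python
def cf_ssh_details(app_guid, instance=0, system_domain="example.com"):
    """Build the SSH connection details for a CF app instance, as described above.

    No password field: that is the one-time code printed by `cf ssh-code`
    and must be obtained at connect time.
    """
    return {
        "host": "ssh." + system_domain,  # check `cf curl /v2/info` for the real domain
        "port": 2222,
        "username": "cf:%s/%d" % (app_guid, instance),
    }

# Example: details for instance 0 of a hypothetical app GUID.
print(cf_ssh_details("54cccad6-9bba-45c6-bb52-83f56d765ff4"))
```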
Having said this, don't go this route. It's a bad idea. The file system for each app instance is ephemeral. Even if you're connecting from other app instances to share the local file system, you can still lose the contents of that file system pretty easily (cf restart) and for reasons possibly outside of your control (unexpected app crash, platform admin does a rolling upgrade, etc).
Instead store your files externally, perhaps on S3 or a similar service, or look at using Volume services.
I have exclusively worked with PCF, so please take my advice with a grain of salt given your Bluemix platform.
If you have a need to look at files created by App2 from App1, what you need is a common resource.
You can expose an S3 resource as a CUPS service, create a service instance, and bind it to both apps. That way both will read/write to the same S3 endpoint.
A quick Google search for Bluemix S3 resource shows: https://console.bluemix.net/catalog/infrastructure/cloud_object_storage
Version 1.11 of Pivotal Cloud Foundry comes with volume services.
Seems like Bluemix has a similar resource - https://console.bluemix.net/docs/containers/container_volumes_ov.html#container_volumes_ov
You may want to give that a try.

How to use connect-proxy on Cygwin

I am trying to run a node.js program behind a corporate firewall. However, I am unable to explicitly tell node.js which proxy to use and therefore all my external connections time out.
I read in some post that I could use connect-proxy as an HTTP proxy for my tunneling needs, but I have no idea how to actually use it.
I want to run the following:
$ node program.js
using connect-proxy.
The only command I have gotten to work so far is this:
$ connect-proxy -H myproxy.com:8083 google.com
GET
HTTP/1.0 302 Found
Location: http://www.google.com/
...
Before going further, it is worth trying the environment variable that a number of other languages and tools support:
export http_proxy=http://proxyserver:port
Proxies often use port 8080, but check the JavaScript in the PAC file loaded by your browser to be sure.
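A quick way to confirm the variable is actually visible to programs you launch is a few lines of Python (assuming an interpreter is handy; the proxy address below is a placeholder):

```python
import os
import urllib.request

# Simulate `export http_proxy=...` for this process; replace with your proxy.
os.environ["http_proxy"] = "http://proxyserver:8080"

# urllib, like many other tools, discovers the proxy from the environment.
print(urllib.request.getproxies())
```

If the dictionary printed here is empty, the variable was not exported into the environment your program sees, which is worth ruling out before debugging the proxy itself.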
If that produces a different result but still doesn't connect, you probably need to do NTLM auth with the proxy; the only way I know to do this is to run NTLMAPS before running your app. If you are really interested in getting this working transparently, then porting NTLMAPS to JavaScript should do the trick.
