FortiClient headless Linux CLI - how to install and configure it to handle certain IP ranges only or permit SSH

I am trying to configure the headless, VPN-only FortiClient on an AWS Ubuntu 20.04 EC2 instance, and though I am able to connect to the target, I am then disconnected from the instance and cannot progress.
Setup:
wget http://cdn.software-mirrors.com/forticlientsslvpn_linux_4.4.2328.tar.gz
tar -xzvf forticlientsslvpn_linux_4.4.2328.tar.gz
cd ./forticlientsslvpn/64bit/helper
sudo ./setup.linux.sh
# Accept license
cd ..
./forticlientsslvpn_cli --server serveraddress:port --vpnuser username
# Enter password
##Connected!
At this stage, I am booted out of the instance and cannot reconnect (a soft restart of the instance is required to regain access).
I can see that there is a configuration file at forticlientsslvpn/64bit/helper/config but I cannot find any documentation describing what can be configured there or whether it is something I should be concerned with.
The CLI itself doesn't take any options other than:
forticlientsslvpn_cli [--proxy proxyaddress:proxyport] --server vpnserveraddress:vpnport [--proxyuser proxyuser] [--vpnuser vpnuser] [--pkcs12 pkcs12path] [--keepalive]
I would like to either:
Preserve my original SSH connection (and any future connections) so I can develop within the VPN; or
Limit the VPN to only package traffic that is going to a specific IP range (CIDR block)
I have found three different methods for installing the client (sudo apt install forticlient, sudo apt install -y openfortivpn, and the tarball above) and cannot navigate between them. I have looked into FortiClientLinuxGuide and installed that tool, but couldn't find out how to configure it as a VPN instead (or where to add the configuration). I had a similar experience with the second one.
This seems to be the only documentation about how to configure the CLI, and it's just the bare minimum: How to setup and install SSLVPN.
This post seems to describe the same problem, ssh-telnet-disconnects, and the solution looks like it would work if only I knew how to set that configuration.
Alternatively, I have looked up split-tunnel configuration, which looks like it would be ideal, but I cannot work out how to set it up. The documentation covers the GUI only: Enable-split-tunnel-feature
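For reference, a hedged sketch of how both goals might be achieved with openfortivpn (one of the clients mentioned above) rather than the Fortinet tarball; the server address, port, and CIDR block are placeholders, and this is a sketch rather than a tested recipe:
# /etc/openfortivpn/config -- all values below are placeholders
host = serveraddress
port = 10443
username = username
set-routes = 0   # don't install the routes pushed by the gateway, so the default route (and SSH) survives
set-dns = 0      # don't let the VPN rewrite /etc/resolv.conf
Then bring the tunnel up and route only the desired CIDR block through it:
sudo openfortivpn -c /etc/openfortivpn/config &
sudo ip route add 10.20.0.0/16 dev ppp0   # placeholder CIDR for the remote network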

Related

Setting up SonarQube on AWS using EC2

Trying to set up SonarQube on EC2 using what should be basic install settings.
Set up a standard Amazon Linux AMI on an m4.large EC2 instance
SSH into the EC2 instance
Install Java
Set it to use Java 8
wget https://sonarsource.bintray.com/Distribution/sonarqube/sonarqube-6.4.zip
unzip into the /etc dir
run sudo ./sonar.sh start
Instance starts
But when I try to reach the app, it never comes up, whether I use the IPv4 public IP, e.g. 187.187.87.87:9000 (not the real IP), or the public DNS name, e.g. ec2-134-73-134-114.compute-1.amazonaws.com:9000 (not real either, just an example).
Perhaps it is my ignorance, or I have not configured something correctly in the initial EC2 setup.
If anyone has any ideas, please let me know.
The issue was that SonarQube's default port is 9000, and this port is not open in the security group unless you apply the default security group in which all ports are open (which is not recommended).
As suggested in the comment by @Issac, opening port 9000 in the instance's AWS security group settings, to allow incoming requests to reach SonarQube, solved the issue.
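For completeness, a hedged sketch of opening the port with the AWS CLI instead of the console (sg-0123456789abcdef0 is a placeholder group ID, and 0.0.0.0/0 should be narrowed to your own address range in practice):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 9000 --cidr 0.0.0.0/0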
You also need to have a database, grant permissions to it in the sonar.properties file, and open the firewall.
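The database wiring mentioned above lives in conf/sonar.properties; a hedged sketch with placeholder credentials and a PostgreSQL example URL (adjust to your own database):
# conf/sonar.properties -- placeholder values
sonar.jdbc.username=sonar
sonar.jdbc.password=sonar-secret
sonar.jdbc.url=jdbc:postgresql://localhost/sonar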

How do I deploy code to hardware nodes that are all on separate networks?

This is an interesting problem I've been thinking about recently and have not come up with or found a solution that I find acceptable.
I'm playing with Raspberry Pis and currently have 6 that I want to use throughout a few of my personal properties for surveillance purposes.
Making them work and sending video streams to my server is all easy, well and good - but how in the world do I deploy code updates to these "nodes" that are not on the same network? Some are behind Wi-Fi networks that I don't have port-forwarding access to, so it's not like I can just port forward, SSH into them, and run some .sh magic to update things.
The best I've come up with is using autossh to keep a constant connection open to one of my servers through reverse SSH, and then SSHing into them through my parent server in parallel and running a .sh script on them when I want to update. But this seems overly excessive, and I'm sure some solution or platform exists to solve this - how else do companies like Redbox or Nest, for example, update firmware on their systems remotely?
I'm actually doing something similar. I have Pis deployed around the city I live in. In order not to have to worry about port forwarding and people changing their router configurations, I started using a service called PageKite (http://pagekite.net/).
I'm not affiliated with them, but I can't say enough good things about the service and price. My Pis are hooked up to screens that need to display certain things at certain times, and I'm able to VNC in very easily no matter where the Pi is to see what's currently playing. I can obviously just SSH in as well.
The following steps from my pi setup guide deal with installing pagekite and getting it to start on boot:
echo deb http://pagekite.net/pk/deb/ pagekite main | sudo tee -a /etc/apt/sources.list
sudo apt-key adv --recv-keys --keyserver keys.gnupg.net AED248B1C7B2CAC3
sudo apt-get update
sudo apt-get install pagekite
sudo leafpad /etc/pagekite.d/10_account.rc
Replace NAME.pagekite.me with the name of the kite
Replace YOURSECRET with whatever the secret is from the pagekite admin console
Remove the line “abort_not_configured” and the comment above it
sudo cp /etc/pagekite.d/80_sshd.rc.sample /etc/pagekite.d/80_sshd.rc
sudo invoke-rc.d pagekite restart
sudo reboot
This assumes you've made an account and set up a "kite".
I think you basically need a reliable reverse tunnel such as PageKite, especially if you plan on expanding your network, as it will turn into a nightmare at a certain size. I believe I'm just going to keep a list of SSH usernames, SSH passwords, and PageKite addresses, then write a script that loops through them and rsyncs my local directory with the new code to the remote directory on each Pi (see the sketch below).
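A hedged sketch of that loop (pis.txt is a hypothetical inventory file with one user@kite-address per line, and it assumes SSH to the kites already works as set up above):
#!/bin/bash
# Push the local ./code directory to every Pi listed in pis.txt.
while read -r host; do
  echo "Updating ${host}..."
  rsync -az --delete ./code/ "${host}:/home/pi/code/"
done < pis.txt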

Web2Py on AWS EC2 Linux

I have an instance running Linux at Amazon AWS EC2 after carefully following the instructions provided by Amazon here: Setting Up to Host a Web App on AWS.
I have set up the security groups as mentioned in the documentation provided by Amazon.
The default security group has all traffic, all protocols, on all ports open.
In addition to the above security rule, I have set up SSH on port 22 and then, using Cyberduck (a great FTP app), I have uploaded the Web2py source code into a folder named web2py at AWS.
After successfully FTPing the source code into this web2py folder, I have SSH'ed into the AWS machine using the Terminal (on my Mac locally), having the my-keys-file.pem on hand:
ssh -i my-keys-file.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
(where the xx are the numbers in the Public DNS as they appear on my instance on EC2 page)
Then I have checked whether my AWS instance has python installed and it does have it.
Thus, I have proceeded to install Web2Py.
python2.6 web2py.py
password = pwd
It warns that the GUI is not available since the Tk library is not installed, but Massimo says here (http://comments.gmane.org/gmane.comp.python.web2py/129181) that it's not critical.
Running Web2py...
If I try:
python web2py.py -a pwd -i 0.0.0.0 -p 80
It says:
there is an error with the Rocket server on that specific port (it is used by another process that is not willing to share...)
If I try:
python web2py.py -a pwd
it says nothing (which begs the question: is web2py running?) and when I try to access the web2py server
http://ec2-xx-xx-xx-xxx.compute-1.amazonaws.com/
or
https://ec2-xx-xx-xx-xxx.compute-1.amazonaws.com/admin
in both cases it says the page is not available because it takes too long to load (nothing about a security cause).
If I try:
python web2py.py -a pwd -i 0.0.0.0 -p 8000
again, it says nothing (is web2py running?)
Trying to access the Web2py server at
http://ec2-xx-xx-xx-xxx.compute-1.amazonaws.com/
or
https://ec2-xx-xx-xx-xxx.compute-1.amazonaws.com/admin
in both cases it says the page is not available, same as above.
I have tried to use the IP address instead, but it is immediately translated to the Amazon format of ec2-xx-xx-xx-xxx.etc...
I have tried to access web2py by explicitly including the port (8000) in the address - still it doesn't work, giving no reason except that the page is not available.
My questions:
Is there any DETAILED recipe on how to install AND run Web2py on AWS EC2?
Is the web2py server running? How can I know if it is running? If it is not, what am I doing incorrectly?
If the web2py server is running, how can I access it?
Any help would be much appreciated.
Thanks
I have deployed my Web2py app to an EC2 instance running Ubuntu, but I guess you can adapt the same approach to your system.
The simplest way to deploy Web2py is to follow the 'One step production deployment' script introduced in the official Web2py book.
wget http://web2py.googlecode.com/hg/scripts/setup-web2py-ubuntu.sh
chmod +x setup-web2py-ubuntu.sh
sudo ./setup-web2py-ubuntu.sh
Running this will install and configure everything you need.
When finished, simply type your IP or domain name into a web browser and you will see the default web2py website.
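If you want to confirm whether the built-in Rocket server is actually listening when running web2py by hand (the repeated "is web2py running?" above), here is a hedged pair of checks from a second SSH session, assuming port 8000 as in the question:
curl -I http://127.0.0.1:8000/      # web2py should answer locally with HTTP response headers
sudo netstat -tlnp | grep 8000      # or: sudo ss -ltnp | grep 8000
If the local curl succeeds but the browser still times out, the blocker is almost certainly the EC2 security group rather than web2py itself. Two related facts worth knowing: binding to port 80 requires root, and web2py refuses remote connections to /admin over plain HTTP by design.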

Git push/pull fails on GitLab in Google Compute Engine

I've installed GitLab on Google Compute Engine using "Click to Deploy" from the project interface. The deployment is successful after a few minutes. I can SSH into the instance, and muck around with it as expected.
I can also log in to GitLab using the web interface, and add SSH keys to my profile. So far, so good. However, when I attempt to push or pull to a new example repository, I receive this message:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I've removed my local SSH config so it doesn't interfere. Do I need to setup an SSH tunnel of some sort? What am I missing?
UPDATE: Wiping out my local ~/.ssh folder, and regenerating an SSH key (which I've added to my profile in GitLab) produces the following error:
Received disconnect from {GITLAB_IP_ADDRESS}: 2: Too many authentication failures for git
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
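"Too many authentication failures" typically means the SSH client offered more keys than the server accepts before reaching the right one. A hedged ~/.ssh/config stanza that pins a single identity for the GitLab host ({GITLAB_IP_ADDRESS} stands in for the real address, and the key path is an assumption):
Host {GITLAB_IP_ADDRESS}
    IdentitiesOnly yes          # offer only the key named below
    IdentityFile ~/.ssh/id_rsa  # the key added to the GitLab profile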
UPDATE 2: It seems GitLab may already have a solution: run sudo gitlab-ctl reconfigure. See here: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md#git-ssh-access-stops-working-on-selinux-enabled-systems
You need to set up SSH key authentication to communicate with GitLab.
1. Log into your development server as your user, and create a key.
ssh-keygen -t rsa
Follow the steps and create a passphrase (that you can remember), as you'll need it to pull and push code from/to GitLab.
2. Now that you've created your key, we can copy it:
cat ~/.ssh/id_rsa.pub
Copy the output of that command (including ssh-rsa) and add it to your GitLab profile (http://my-gitlab-server.com/profile/keys/new).
3. Ensure you have the correct privileges for the project(s)
Ensure you have at least the Developer role. (Screengrab of roles: http://i.stack.imgur.com/DSSvl.jpg)
4. Now, copy the project link
Go into your project, and find the SSH link in the top right;
5. Now back to your development server
Navigate to the directory where you'd like to work, and run the following:
$ git init
$ git remote add origin <<project_url>>
$ git fetch
Where <<project_url>> is the link we copied in step 4.
You will be prompted for your password (this is your SSH key passphrase, not your server password) and asked to add the host to your known_hosts file. After that, the project will start to download and you can enjoy development.
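Before running the git commands, a quick hedged check that the key is actually accepted (my-gitlab-server.com is the placeholder host from step 2):
ssh -T git@my-gitlab-server.com
# On success, GitLab prints a short welcome message rather than falling back to a password prompt.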
I did these steps on a CentOS 6.4 machine with DigitalOcean, but they shouldn't differ from using Google CE.
Edit
Quote from Marty Penner's answer, as per this comment:
Solved it! Thanks to @sxleixer and @Alexander Wenzowski for figuring this out.
Apparently, SELinux was interfering with a non-standard location for the .ssh directory. I needed to run the following commands on the Compute Engine instance:
sudo yum -y install policycoreutils-python # Install the `semanage` tool
sudo semanage fcontext -a -t ssh_home_t "/var/opt/gitlab/.ssh/authorized_keys" # Allow the nonstandard ssh_home_t
See the full thread here:
Google Cloud Engine. Permission denied (publickey,gssapi-keyex,gssapi-with-mic)
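One hedged addition to the commands quoted above: semanage fcontext only records the labeling rule, so the new context typically still has to be applied to the existing files:
sudo restorecon -R -v /var/opt/gitlab/.ssh   # apply the recorded SELinux context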
In my situation the git user wasn't set up completely. If you get messages in your log files like "User git not allowed because account is locked" (under CentOS or Red Hat it's /var/log/secure), then you simply need to activate the user via passwd -d git.

DHCP overwrites Cisco VPN resolv.conf on Linux

I'm using an Ubuntu 8.04 (x86_64) machine to connect to my employer's Cisco VPN. (The client didn't compile out of the box, but I found patches to update the client to compile on kernels released in the last two years.) This all works great, until my DHCP client decides to renew its lease and updates /etc/resolv.conf, replacing the VPN-specific name servers with my general network servers.
Is there a good way to prevent my DHCP client from updating /etc/resolv.conf while my VPN is active?
If you are running without NetworkManager handling the connections, use the resolvconf package to act as an intermediary to programs tweaking /etc/resolv.conf: sudo apt-get install resolvconf
If you are using NetworkManager it will handle this for you, so get rid of the resolvconf package: sudo apt-get remove resolvconf
I found out about this when setting up vpnc on Ubuntu last week. A search for vpn resolv.conf on ubuntuforums.org returns 250 results, many of which are very relevant!
If you are using the Ubuntu default with NetworkManager, try removing the CiscoVPN client and use the NetworkManager vpnc plugin to connect to the Cisco VPN. This should avoid all problems, since NetworkManager then knows about your VPN connection.
I would advise following the advice from @Sean, but if that fails for whatever reason, it should be possible to configure dhclient not to request DNS servers in /etc/dhcp3/dhclient.conf.
chattr +i /etc/resolv.conf should work (use -i to undo).
But the better thing is to configure your dhclient.conf:
https://calomel.org/dhclient.html
Look at superseding domain-name-servers and domain-name.
Also look at "send host-name;"
If it works at your workplace, you will have a proper hostname for your PC instead of some weird name that the DHCP server assigns.
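A hedged sketch of the relevant lines in /etc/dhcp3/dhclient.conf (the domain and server addresses are placeholders):
# Pin DNS settings so a lease renewal can't replace the VPN's resolvers
supersede domain-name "example.internal";
supersede domain-name-servers 10.0.0.53, 10.0.0.54;
# Ask the DHCP server to register a friendly hostname for this machine
send host-name "my-desktop";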
vpnc seems to do the right thing with my employer's Cisco concentrator. I jump on and off the VPN, and it seems to update everything smoothly.
The DHCP client daemon can be told not to update resolv.conf with a command-line switch (-r, I think, depending on the client).
That's less dynamic, because you'd have to restart/reconfigure DHCP when you connect, but it's not too hard. Similarly, you could just stop the service, but you might lose your IP in the meantime, so I wouldn't really recommend that.
Alternatively, you could run the DHCP client from within a cron job, adding the appropriate process checks.
This problem is much more noticeable on networks with short DHCP lease times. There is a bug filed against Ubuntu's dhcp3 package on Launchpad:
https://bugs.launchpad.net/ubuntu/+source/dhcp3/+bug/90681
It includes this patch in the description:
--- /sbin/dhclient-script.orig 2007-03-08 19:19:56.000000000 +0000
+++ /sbin/dhclient-script 2007-03-08 19:19:46.000000000 +0000
@@ -13,6 +13,10 @@
# The alias handling in here probably still sucks. -mdz
make_resolv_conf() {
+ # don't overwrite resolv.conf at RENEW time, since a VPN/PPTP tunnel may
+ # have updated it with remote DNS servers
+ [ "$reason" = "RENEW" ] && return
+
if [ -n "$new_domain_name" -o -n "$new_domain_name_servers" ]; then
# Find out whether we are going to mount / rw
exec 9>&0 </etc/fstab
This change to /sbin/dhclient-script stops the DHCP client from overwriting /etc/resolv.conf when it renews its lease.
