ping cassandra on virtualbox guest from windows host - linux

I'm using VirtualBox to run a Linux guest that hosts a Cassandra DB, and I'm trying to access it from my Windows host; however, I don't know the right configuration to do that.
On VirtualBox I'm using "host only networking" to communicate with Windows.
Does anyone know how to set this up?

It's probably the network configuration of the guest.
In a VirtualBox environment, if you want to communicate with the guest from the host, the network type of the VM must be "bridged networking" or "host only networking".
You can find more information here: https://www.virtualbox.org/manual/ch06.html.

Access Cassandra on Guest VM from Host OS
For future reference to myself and others, this worked for me for Cassandra v3.10:
http://grokbase.com/t/cassandra/user/14cpyy7bt8/connect-to-c-instance-inside-virtualbox
My guest VM was provisioned with Cassandra and had a host-only network adapter with IP 192.168.5.10.
I then had to modify /etc/cassandra/cassandra.yaml to change:
From
rpc_address: localhost
To
rpc_address: 192.168.5.10
Then run sudo service cassandra restart and give it about 15 seconds.
Then, on the guest VM or on the host, the following worked:
cqlsh 192.168.5.10
Hope that helps someone.
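A minimal sketch of that edit as a script (the same sed substitution the Vagrantfile below uses, run here against a throwaway file path purely for illustration; on the real VM the target is /etc/cassandra/cassandra.yaml and needs sudo):

```shell
# Demonstrate the rpc_address substitution on a throwaway copy of the setting.
printf 'rpc_address: localhost\n' > /tmp/cassandra_demo.yaml
sed -i -e 's/^rpc_address: localhost/rpc_address: 192.168.5.10/' /tmp/cassandra_demo.yaml
cat /tmp/cassandra_demo.yaml   # prints: rpc_address: 192.168.5.10
```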
Vagrantfile for reference
Note that it doesn't work for multiple nodes in a cluster yet.
# Adjustable settings
## Cassandra cluster settings
mem_mb = "3000"
cpu_count = "2"
server_count = 1
network = '192.168.5.'
first_ip = 10

servers = []
seeds = []
cassandra_tokens = []
(0..server_count-1).each do |i|
  name = 'cassandra-node' + (i + 1).to_s
  ip = network + (first_ip + i).to_s
  seeds << ip
  servers << {'name' => name,
              'ip' => ip,
              'provision_script' => "sleep 15; sudo sed -i -e 's/^rpc_address: localhost/rpc_address: #{ip}/g' /etc/cassandra/cassandra.yaml; sudo service cassandra restart;",
              'initial_token' => 2**127 / server_count * i}
end
# Configure VM server
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/xenial64"
  servers.each do |server|
    config.vm.define server['name'] do |x|
      x.vm.provider :virtualbox do |v|
        v.name = server['name']
        v.customize ["modifyvm", :id, "--memory", mem_mb]
        v.customize ["modifyvm", :id, "--cpus", cpu_count]
      end
      x.vm.network :private_network, ip: server['ip']
      x.vm.hostname = server['name']
      x.vm.provision "shell", path: "provision.sh"
      x.vm.provision "shell", inline: server['provision_script']
    end
  end
end
provision.sh
# install Java and a few base packages
add-apt-repository ppa:openjdk-r/ppa
apt-get update
apt-get install vim curl zip unzip git python-pip -y -q
# Java install - adjust if needed
# apt-get install openjdk-7-jdk -y -q
apt-get install openjdk-8-jdk -y -q
# Install Cassandra
echo "deb http://www.apache.org/dist/cassandra/debian 310x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
curl https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
sudo apt-get update
sudo apt-get install cassandra -y
sudo service cassandra start

So are you trying to connect to Cassandra on the Linux guest in your VirtualBox, or is it the other way around?
Anyway, whichever the direction, make sure that your IP is reachable and that the Cassandra ports are open (start with 9042).

Related

Setting up a Remotely Accessible Postgres Database with Linux and PGAdmin

I'm trying to set up a remotely accessible Postgres database. I want to host this database on one Linux-based device (HOST), and to access it on another Linux-based device (CLIENT).
In my specific case, HOST is a desktop device running Ubuntu. CLIENT is a Chromebook with a Linux virtual system. (I know. But it's the closest thing to a Linux-based device that I have to hand.)
Steps Already Taken to Set Up the Database
Installed the required software on HOST using APT.
PGP_KEY_URL="https://www.postgresql.org/media/keys/ACCC4CF8.asc"
POSTGRES_URL_STEM="http://apt.postgresql.org/pub/repos/apt/"
POSTGRES_URL="$POSTGRES_URL_STEM `lsb_release -cs`-pgdg main"
POSTGRES_VERSION="12"
PGADMIN_URL_SHORT="https://www.pgadmin.org/static/packages_pgadmin_org.pub"
PGADMIN_URL_STEM="https://ftp.postgresql.org/pub/pgadmin/pgadmin4/apt"
PGADMIN_TO_ECHO="deb $PGADMIN_URL_STEM/`lsb_release -cs` pgadmin4 main"
PGADMIN_PATH="/etc/apt/sources.list.d/pgadmin4.list"
sudo apt install curl --yes
sudo apt install gnupg2 --yes
wget --quiet -O - $PGP_KEY_URL | sudo apt-key add -
echo "deb $POSTGRES_URL" | sudo tee /etc/apt/sources.list.d/pgdg.list
sudo apt install postgresql-$POSTGRES_VERSION --yes
sudo apt install postgresql-client-$POSTGRES_VERSION --yes
curl $PGADMIN_URL_SHORT | sudo apt-key add -
sudo sh -c "echo \"$PGADMIN_TO_ECHO\" > $PGADMIN_PATH && apt update"
sudo apt update
sudo apt install pgadmin4 --yes
Created a new Postgres user.
NU_USERNAME="my_user"
NU_PASSWORD="guest"
NU_QUERY="CREATE USER $NU_USERNAME WITH superuser password '$NU_PASSWORD';"
sudo -u postgres psql -c "$NU_QUERY"
Created the new server and database. I did this manually, using the PGAdmin GUI.
Added test data, a table with a couple of records. I did this with a script.
Followed the steps given in this answer to make the database remotely accessible.
Steps Already Taken to Connect to the Database REMOTELY
Installed PGAdmin on CLIENT.
Attempted to connect using PGAdmin. I used the "New Server" wizard, and entered:
Host IP Address: 192.168.1.255
Port: 5432 (same as when I set up the database on HOST)
User: my_user
Password: guest
However, when I try to save the connection, PGAdmin responds after a few seconds saying that the connection has timed out.
You have to configure listen_addresses in postgresql.conf (e.g. /var/lib/pgsql/data/postgresql.conf on Red Hat-style systems, or /etc/postgresql/12/main/postgresql.conf on Ubuntu) like this:
listen_addresses = '*'
Next make sure your firewall doesn't block the connection by checking if telnet can connect to your server:
$ telnet 192.168.1.255 5432
Connected to 192.168.1.255.
Escape character is '^]'.
If you see Connected, network connectivity is OK. Next you have to configure access rights for remote hosts.
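Those access rights live in pg_hba.conf, which sits next to postgresql.conf. A minimal sketch, assuming the client is on the 192.168.1.0/24 subnet (adjust the database, user, and CIDR to your setup):

```
# pg_hba.conf -- allow password-authenticated connections from the local subnet
# TYPE  DATABASE  USER     ADDRESS         METHOD
host    all       my_user  192.168.1.0/24  md5
```

Reload PostgreSQL afterwards (for example, sudo systemctl reload postgresql) so the change takes effect.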

vagrant freezes after a while

Using Vagrant for development, the VM freezes after working with it for a while. I have to reload the box in order to be able to work with it again. The Vagrantfile is very simple and straightforward:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial32"
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.network :private_network, ip: "192.168.68.8"
  config.vm.provider "virtualbox" do |v|
    v.memory = 8192
    v.cpus = 2
  end
  config.vm.synced_folder "./", "/var/www/html", owner: "www-data", group: "www-data"
  config.ssh.insert_key = false
  config.vm.provision :shell, path: "config/vagrant/bootstrap.sh"
end
and config/vagrant/bootstrap.sh looks like:
#!/usr/bin/env bash
# Variables
DBNAME=dbname
DBUSER=dbuser
DBPASSWD=dbpassword
apt-get -y install software-properties-common
add-apt-repository -y ppa:ondrej/php
apt-get update
debconf-set-selections <<< "mysql-server mysql-server/root_password password $DBPASSWD"
debconf-set-selections <<< "mysql-server mysql-server/root_password_again password $DBPASSWD"
apt-get -y install mysql-server
sed -i "s/.*bind-address.*/bind-address = 0.0.0.0/" /etc/mysql/mysql.conf.d/mysqld.cnf
mysql -uroot -p$DBPASSWD -e "CREATE DATABASE $DBNAME"
mysql -uroot -p$DBPASSWD -e "grant all privileges on $DBNAME.* to '$DBUSER'@'%' identified by '$DBPASSWD'"
mysql -uroot -p$DBPASSWD -e "flush privileges"
sudo apt-get -y install apache2 php7.4 php7.4-mysql php7.4-mbstring php7.4-dom php7.4-sqlite php7.4-zip php7.4-curl php7.4-intl
sudo apt-get -y install curl composer zip unzip
sudo phpenmod pdo_mysql
sudo service apache2 restart
sudo service mysql restart
I read a blog post that suggests removing config.ssh.insert_key = false as a solution, but this does not work.
Any ideas?
EDIT
There is no output. It works as expected for approximately 15 minutes before the entire box freezes and stops responding (no response to shell input, and no vagrant ssh either).
Ok, I think I found the issue.
I have changed this line:
v.cpus = 2
to
v.cpus = 1
and I had no hang ups the whole day.

Ubuntu Focal headless setup on Raspberry pi 4 - cloud init wifi initialisation before first reboot

I'm having trouble setting up a fully headless install of Ubuntu Server Focal (ARM) on a Raspberry Pi 4 using cloud-init config. The whole purpose of doing this is to simplify the SD card swap in case of failure. I'm trying to use cloud-init config files to apply static config for lan/wlan, create a new user, add SSH authorized keys for the new user, install Docker, etc. However, whatever I do, it seems the WiFi settings are not applied before the first reboot.
Step 1: burn the image onto the SD card.
Step 2: overwrite system-boot/network-config and system-boot/user-data on the SD card with the config files below.
network-config
version: 2
renderer: networkd
ethernets:
  eth0:
    dhcp4: false
    optional: true
    addresses: [192.168.100.8/24]
    gateway4: 192.168.100.2
    nameservers:
      addresses: [192.168.100.2, 8.8.8.8]
wifis:
  wlan0:
    optional: true
    access-points:
      "AP-NAME":
        password: "AP-Password"
    dhcp4: false
    addresses: [192.168.100.13/24]
    gateway4: 192.168.100.2
    nameservers:
      #search: [mydomain, otherdomain]
      addresses: [192.168.100.2, 8.8.8.8]
user-data
chpasswd:
  expire: true
  list:
    - ubuntu:ubuntu
# Enable password authentication with the SSH daemon
ssh_pwauth: true
groups:
  - myuser
  - docker
users:
  - default
  - name: myuser
    gecos: My Name
    primary_group: myuser
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAA....
    lock_passwd: false
    passwd: $6$rounds=4096$7uRxBCbz9$SPdYdqd...
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - git
runcmd:
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io
  - systemctl start docker
  - systemctl enable docker
## TODO: add git deployment and configure folders
power_state:
  mode: reboot
During the first boot cloud-init always applies the fallback network config.
I also tried to apply the headless config for wifi as described here.
Created wpa_supplicant.conf and copied it to SD system-boot folder.
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=RO
network={
    ssid="AP-NAME"
    psk="AP-Password"
}
Also created an empty ssh file and copied it to system-boot.
The run commands always fail since, during the first boot, cloud-init applies the fallback network config. After the reboot, the lan/wlan settings are applied, the user is created, and the SSH authorized keys are added. However, I still need to SSH into the Pi and install the remaining packages (docker etc.), which I wanted to avoid. Am I doing something wrong?
I'm not sure if you ever found a workaround, but I'll share some information I found when researching options.
Ubuntu's Raspberry Pi WiFi Setup Page states the need for a reboot when using network-config with WiFi:
Note: During the first boot, your Raspberry Pi will try to connect to this network. It will fail the first time around. Simply reboot sudo reboot and it will work.
There's an interesting workaround & approach in this repo.
It states it was created for 18.04, but it should work with 20.04, as both Server versions use netplan and systemd-networkd.
Personally, I've gone a different route.
I create custom images that contain my settings and packages, then burn them to uSD or share them via a TFTP server. I was surprised at how easy this was.
There's a good post on creating custom images here.
Some important additional info is here.

How to share (config.vm.synced_folder), directories between Windows 10 and CentOS7 Virtual Machine created using Vagrant and VirtualBox

I'm trying to create a CentOS 7 VM using Vagrant (2.2.3) and VirtualBox (6.0.4) on Windows 10, using the following Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/centos-7"
  config.vm.network "private_network", ip: "192.168.56.3"
  config.vm.synced_folder "D://SharedWithVM//CentOS7-Work", "/media/sf_CentOS7-Work", type: "virtualbox"
  config.vm.provider "virtualbox" do |vb|
    vb.name = "Test"
  end
  config.vm.provision "shell", path: "./scripts/InstallGuestAdditions.sh"
end
and the InstallGuestAdditions.sh shell script is the following:
#!/bin/bash
curl -C - -O http://download.virtualbox.org/virtualbox/6.0.4/VBoxGuestAdditions_6.0.4.iso
sudo mkdir /media/VBoxGuestAdditions
sudo mount -o loop,ro VBoxGuestAdditions_6.0.4.iso /media/VBoxGuestAdditions
sudo sh /media/VBoxGuestAdditions/VBoxLinuxAdditions.run
rm VBoxGuestAdditions_6.0.4.iso
sudo umount /media/VBoxGuestAdditions
sudo rmdir /media/VBoxGuestAdditions
All works fine and the CentOS 7 VM is created.
If I check the machine's properties for shared folders, I can see this
So I'm quite surprised by this path: \\?\D:\SharedWithVM\CentOS7-Work.
How should I change my Vagrantfile to obtain the right path?
I've tried connecting to my CentOS 7 VM using the vagrant ssh command, and everything works. The command cd /media/sf_CentOS7-Work also works fine, but no files or directories can be listed or shared between the two systems.
I've tried creating files and directories both in Windows 10 and in the CentOS 7 VM.
Any suggestion or example would be appreciated.

Provision Vagrant Linux VM with another Vagrant Linux VM running Ansible

I know Ansible has issues running on Windows, which is why I want to avoid using it on my host. I want to provision a local Linux VM running in VirtualBox.
I was wondering if anyone can tell me whether it is possible to use Vagrant to bring up two independent VMs on the same box, install Ansible on one of them, then SSH into that VM and use it as the Ansible control machine to provision the other VM. So this is not a VM inside a VM; it is just two VMs running on Windows via Vagrant, where I SSH into one of them and use Ansible to provision the other.
Steps:
Bring up VM 1 with Vagrant and install Ansible.
Bring up VM 2 with Vagrant.
SSH into VM 1.
Use Ansible on VM 1 to provision VM 2.
Can this be done? Sorry if that sounded confusing.
There is now an Ansible local provisioner in Vagrant 1.8.0, which you can use in your scenario.
In particular, look at the "Tips and Tricks" section of the documentation; there is an exact solution there (which worked for me).
Below is my Vagrantfile for this scenario (slightly different from the one in the documentation), which also solves potential problems with the SSH permissions and the "executable" inventory file (if you're using Cygwin):
Vagrant.configure(2) do |config|
  config.vm.synced_folder "./", "/vagrant",
    owner: "vagrant",
    mount_options: ["dmode=775,fmode=600"]

  config.vm.define "vm2" do |machine|
    machine.vm.box = "box-cutter/ubuntu1404-desktop"
    machine.vm.network "private_network", ip: "172.17.177.21"
  end

  config.vm.define 'vm1' do |machine|
    machine.vm.box = "ubuntu/trusty64"
    machine.vm.network "private_network", ip: "172.17.177.11"
    machine.vm.provision :ansible_local do |ansible|
      ansible.provisioning_path = "/vagrant"
      ansible.playbook = "provisioning/playbook.yml"
      ansible.limit = "vm2"
      ansible.inventory_path = "inventory"
      ansible.verbose = "vvv"
      ansible.install = true
    end
  end
end
and inventory file:
vm1 ansible_connection=local
vm2 ansible_ssh_host=172.17.177.21 ansible_ssh_private_key_file=/vagrant/.vagrant/machines/vm2/virtualbox/private_key
In order to provision a box you don't necessarily need another box; in this Windows scenario you could simply write your playbooks, share them with the guest, and run them with ansible-playbook via shell provisioning.
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  $script = <<-SCRIPT
    sudo apt-get install -y software-properties-common
    sudo apt-add-repository -y ppa:ansible/ansible
    sudo apt-get update
    sudo apt-get install -y ansible
    ansible-playbook /home/vagrant/provisioning/playbook.yml
  SCRIPT
  config.vm.synced_folder "./provisioning", "/home/vagrant/provisioning"
  config.vm.provision "shell", inline: $script
end
The first lines install Ansible on your box; the script then runs the playbook that you have shared into the box.
This is just an example; I once used this approach to provision my working Vagrant box. I hope this idea helps you.
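Neither answer shows provisioning/playbook.yml itself; as a hypothetical illustration (the task and module choice are mine, not from the original answers), a minimal playbook could look like:

```yaml
# Hypothetical minimal playbook, for illustration only
- hosts: all
  become: true
  tasks:
    - name: Ensure git is present
      apt:
        name: git
        state: present
        update_cache: yes
```

Any playbook placed in the shared provisioning folder would be picked up the same way.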
