I'm trying to set up a remotely accessible Postgres database. I want to host this database on one Linux-based device (HOST) and access it from another Linux-based device (CLIENT).
In my specific case, HOST is a desktop machine running Ubuntu. CLIENT is a Chromebook with a Linux virtual system. (I know. But it's the closest thing to a Linux-based device that I have to hand.)
Steps Already Taken to Set Up the Database
Installed the required software on HOST using APT.
PGP_KEY_URL="https://www.postgresql.org/media/keys/ACCC4CF8.asc"
POSTGRES_URL_STEM="http://apt.postgresql.org/pub/repos/apt/"
POSTGRES_URL="$POSTGRES_URL_STEM `lsb_release -cs`-pgdg main"
POSTGRES_VERSION="12"
PGADMIN_URL_SHORT="https://www.pgadmin.org/static/packages_pgadmin_org.pub"
PGADMIN_URL_STEM="https://ftp.postgresql.org/pub/pgadmin/pgadmin4/apt"
PGADMIN_TO_ECHO="deb $PGADMIN_URL_STEM/`lsb_release -cs` pgadmin4 main"
PGADMIN_PATH="/etc/apt/sources.list.d/pgadmin4.list"
sudo apt install curl --yes
sudo apt install gnupg2 --yes
wget --quiet -O - $PGP_KEY_URL | sudo apt-key add -
echo "deb $POSTGRES_URL" | sudo tee /etc/apt/sources.list.d/pgdg.list
sudo apt install postgresql-$POSTGRES_VERSION --yes
sudo apt install postgresql-client-$POSTGRES_VERSION --yes
sudo curl $PGADMIN_URL_SHORT | sudo apt-key add
sudo sh -c "echo \"$PGADMIN_TO_ECHO\" > $PGADMIN_PATH && apt update"
sudo apt update
sudo apt install pgadmin4 --yes
Created a new Postgres user.
NU_USERNAME="my_user"
NU_PASSWORD="guest"
NU_QUERY="CREATE USER $NU_USERNAME WITH superuser password '$NU_PASSWORD';"
sudo -u postgres psql -c "$NU_QUERY"
Created the new server and database. I did this manually, using the PGAdmin GUI.
Added test data, a table with a couple of records. I did this with a script.
Followed the steps given in this answer to make the database remotely accessible.
Steps Already Taken to Connect to the Database REMOTELY
Installed PGAdmin on CLIENT.
Attempted to connect using PGAdmin. I used the "New Server" wizard, and entered:
Host IP Address: 192.168.1.255
Port: 5432 (same as when I set up the database on HOST)
User: my_user
Password: guest
However, when I try to save the connection, PGAdmin responds after a few seconds saying that the connection has timed out.
You have to configure listen_addresses in /var/lib/pgsql/data/postgresql.conf (on Debian/Ubuntu the file is typically /etc/postgresql/12/main/postgresql.conf) like this:
listen_addresses = '*'
Next make sure your firewall doesn't block the connection by checking if telnet can connect to your server:
$ telnet 192.168.1.255 5432
Connected to 192.168.1.255.
Escape character is '^]'.
If you see Connected, network connectivity is OK. Next you have to configure access rights for remote hosts in pg_hba.conf.
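For example, a minimal sketch (the subnet below is an assumption; adjust it to your LAN, and the file sits next to postgresql.conf):
# Allow password (md5) logins from the local subnet, then reload the server
echo "host    all    all    192.168.1.0/24    md5" | sudo tee -a /var/lib/pgsql/data/pg_hba.conf
sudo -u postgres psql -c "SELECT pg_reload_conf();"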
Related
I want to call SSH via GitLab and push changes. I already have a working setup, but I want to put my server behind a VPN so that it is only accessible from the VPN server's IP address.
Do you have something to add?
I have added this
- which openvpn || (apt-get update -y -qq && apt-get install -y -qq openvpn)
- cat <<< $GITLAB_PUSH_OPENVPN > /etc/openvpn/client.conf
- cat <<< "log /etc/openvpn/client.log" >> /etc/openvpn/client.conf
- echo "I'm going to start OPENVPN connection. Please wait. Timeout 30s."
- openvpn --config /etc/openvpn/client.conf --daemon
- sleep 30s
- echo "Giving some info after daemon is getting started."
- cat /etc/openvpn/client.log
- ping -c 1 1.1.1.1
- echo "Importing VPN has been successful."
I have the $GITLAB_PUSH_OPENVPN variable (the OpenVPN client .ovpn config) with gateway redirect enabled so the client can still reach the internet.
I get a successful connection, but then I have no internet access and can't reach my server. When I use the same file with the OpenVPN Connect client on Windows, I don't have any issues.
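For debugging inside the CI job, a few extra script lines like these (purely illustrative, not part of the job above) usually show whether the redirect-gateway route and DNS were actually applied in the runner:
- ip addr show tun0        # did the tunnel interface come up?
- ip route show            # is there a default route via tun0 (redirect-gateway)?
- cat /etc/resolv.conf     # DNS pushed by the VPN is often not applied inside a container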
Regards
I'm having trouble setting up a fully headless install of Ubuntu Server Focal (ARM) on a Raspberry Pi 4 using a cloud-init config. The whole purpose of doing this is to simplify the SD card swap in case of failure. I'm trying to use cloud-init config files to apply static config for LAN/WLAN, create a new user, add SSH authorized keys for the new user, install Docker, etc. However, whatever I do, the Wi-Fi settings are not applied before the first reboot.
Step 1: burn the image onto the SD card.
Step 2: overwrite system-boot/network-config and system-boot/user-data on the SD card with the config files below.
network-config
version: 2
renderer: networkd
ethernets:
  eth0:
    dhcp4: false
    optional: true
    addresses: [192.168.100.8/24]
    gateway4: 192.168.100.2
    nameservers:
      addresses: [192.168.100.2, 8.8.8.8]
wifis:
  wlan0:
    optional: true
    access-points:
      "AP-NAME":
        password: "AP-Password"
    dhcp4: false
    addresses: [192.168.100.13/24]
    gateway4: 192.168.100.2
    nameservers:
      #search: [mydomain, otherdomain]
      addresses: [192.168.100.2, 8.8.8.8]
user-data
chpasswd:
  expire: true
  list:
    - ubuntu:ubuntu
# Enable password authentication with the SSH daemon
ssh_pwauth: true
groups:
  - myuser
  - docker
users:
  - default
  - name: myuser
    gecos: My Name
    primary_group: myuser
    groups: sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAA....
    lock_passwd: false
    passwd: $6$rounds=4096$7uRxBCbz9$SPdYdqd...
packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common
  - git
runcmd:
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
  - add-apt-repository "deb [arch=arm64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - apt-get update -y
  - apt-get install -y docker-ce docker-ce-cli containerd.io
  - systemctl start docker
  - systemctl enable docker
## TODO: add git deployment and configure folders
power_state:
  mode: reboot
During the first boot cloud-init always applies the fallback network config.
I also tried to apply the headless config for wifi as described here.
Created wpa_supplicant.conf and copied it to SD system-boot folder.
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=RO
network={
    ssid="AP-NAME"
    psk="AP-Password"
}
Also created an empty ssh file and copied it to system-boot
The run commands always fail, since during the first boot cloud-init applies the fallback network config. After the reboot, the LAN/WLAN settings are applied, the user is created, and the SSH authorized keys are added. However, I still need to SSH into the Pi and install the remaining packages (Docker etc.), which is what I wanted to avoid. Am I doing something wrong?
I'm not sure if you ever found a workaround, but I'll share some information I found when researching options.
Ubuntu's Raspberry Pi Wi-Fi setup page states the need for a reboot when using network-config with Wi-Fi:
Note: During the first boot, your Raspberry Pi will try to connect to this network. It will fail the first time around. Simply reboot (sudo reboot) and it will work.
There's an interesting workaround & approach in this repo.
It states it was created for 18.04, but it should work with 20.04 as both Server versions use netplan and systemd-networkd.
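One workaround pattern (my own sketch, not taken from that repo) is a runcmd entry in user-data that keeps re-applying the rendered netplan config until wlan0 has an address, so the remaining runcmd steps can reach the network; wlan0 is the interface name from the question:
- sh -c 'until ip -4 addr show wlan0 | grep -q "inet "; do netplan apply; sleep 10; done'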
Personally, I've gone a different route.
I create custom images that contain my settings & packages, then burn to uSD or share via a TFTP server. I was surprised at how easy this was.
There's a good post on creating custom images here.
Some important additional info is here.
I am new to Hyperledger fabric.
I was able to use one tutorial to:
install prerequisites and hyperledger composer development tools
create a fabric network
install/deploy business network
create an angular front end
However, the fabric network that got created has only one organization and a peer. For my POC, I need three organizations with one peer each.
How can I add additional organizations and peers to an existing Fabric network?
Steps
A) Install prerequisites
(Run in dir - dev5@ubuntu:~$)
1) You can start by updating and upgrading the package manager
sudo apt-get update
sudo dpkg --configure -a
2 Install curl
sudo apt-get install curl
3 Check curl version
curl --version
4 Install Go Language
cd $HOME/
wget https://storage.googleapis.com/golang/go1.8.1.linux-amd64.tar.gz
tar -xvf go1.8.1.linux-amd64.tar.gz
mkdir $HOME/gopath
export GOPATH=$HOME/gopath
export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin
go version
5 Download the prerequisites file using the following command
curl -O https://hyperledger.github.io/composer/latest/prereqs-ubuntu.sh
6 Install libltdl-dev
apt-get install libltdl-dev
7 Open the prereqs-ubuntu.sh file for reference. The following commands are taken from that file. Check that they match; if not, use the ones in the file.
8 This command is at the string "Array of supported versions". Run it:
declare -a versions=('trusty' 'xenial' 'yakkety' 'bionic');
9 Set the CODENAME variable that is used later
if [ -z "$1" ]; then
source /etc/lsb-release || \
(echo "Error: Release information not found, run script passing Ubuntu version codename as a parameter"; exit 1)
CODENAME=${DISTRIB_CODENAME}
else
CODENAME=${1}
fi
10 Check if version is supported
if echo ${versions[@]} | grep -q -w ${CODENAME}; then
echo "Installing Hyperledger Composer prereqs for Ubuntu ${CODENAME}"
else
echo "Error: Ubuntu ${CODENAME} is not supported"
exit 1
fi
11 Update the package manager
sudo apt-get update
12 Install Git
sudo apt-get install -y git
13 Install nvm dependencies
sudo apt-get -y install build-essential libssl-dev
14 Execute nvm installation script
curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash
15 Set up nvm environment without restarting the shell
export NVM_DIR="${HOME}/.nvm"
[ -s "${NVM_DIR}/nvm.sh" ] && . "${NVM_DIR}/nvm.sh"
[ -s "${NVM_DIR}/bash_completion" ] && . "${NVM_DIR}/bash_completion"
16 Install node
nvm install --lts
17 Configure nvm to use the LTS version by default
nvm use --lts
nvm alias default 'lts/*'
18 Install the latest version of npm
npm install npm@latest -g
19 Add Docker repository key to APT keychain
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
20 Update where APT will search for Docker Packages
echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu
${CODENAME} stable" | \
sudo tee /etc/apt/sources.list.d/docker.list
21 Update package lists
sudo apt-get update
22 Verify that APT is pulling from the correct repository
sudo apt-cache policy docker-ce
23 Install Docker
sudo apt-get -y install docker-ce
24 Install docker compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.13.0/docker-compose-$(uname -s)-$(uname -m)" \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
25 Install python v2 if required
set +e
COUNT="$(python -V 2>&1 | grep -c 2.)"
if [ ${COUNT} -ne 1 ]
then
sudo apt-get install -y python-minimal
fi
26 Install unzip, required to install hyperledger fabric.
sudo apt-get -y install unzip
27 Upgrade docker-compose as >= 1.18 is needed
sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
28 Clone the fabric-samples repository from GitHub
git clone https://github.com/mahoney1/fabric-samples.git
B Install hyperledger composer development tools
(Run in dir - dev5@ubuntu:~$)
1 Install the CLI tools:
composer-cli:                    npm install -g composer-cli
composer-rest-server:            npm install -g composer-rest-server
generator-hyperledger-composer:  npm install -g generator-hyperledger-composer
Yeoman:                          npm install -g yo
2 Set up your IDE
https://code.visualstudio.com/download
Open VSCode, go to Extensions, then search for and install the Hyperledger Composer extension from the Marketplace.
C Run fabric network
(Run in dir - dev5@ubuntu:~$ fabric-samples)
1 Change directory to fabric-samples
cd fabric-samples
2 Download the platform binaries, including cryptogen, using the following command (the bash command takes three parameters; a typical form is shown below):
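The exact command is not reproduced in these notes; the form used by the Fabric documentation at the time looked roughly like this (treat the URL and the three version parameters as assumptions/placeholders):
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s 1.1.0 1.1.0 0.4.6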
3 To work with current fabric level, run this command
git checkout multi-org
4 Check downloaded binaries. Change directory to bin
cd bin
ls
5 Change to first-network directory
cd ../
cd first-network
ls
6 Generate the required certificates and artifacts for your first network
./byfn.sh -m generate
7 Start the fabric
sudo ./byfn.sh -m up -s couchdb -a
If you get the error "Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?":
Check if docker is active
sudo systemctl is-active docker
If not active, then
sudo docker images
sudo usermod -aG docker $(whoami)
sudo usermod -a -G docker $USER
sudo docker --version
To start docker, run
sudo service docker restart
8 Start the fabric again
sudo ./byfn.sh -m up -s couchdb -a
9 If the network still fails to start, restart the channel
sudo ./byfn.sh -m restart -c mychannel
D Deploy the business network
Organization Org1 is represented by Alice. Organization Org2 is represented by Bob.
1 Create a temporary working directory (and subdirectories) to manage the Composer connection profiles and key/certificate files
mkdir -p /tmp/composer/org1
mkdir -p /tmp/composer/org2
2 Create a base connection profile that describes this fabric network and can be given to Alice and Bob
Go to /tmp/composer
cd /
cd tmp
cd composer
Open an editor, copy and paste the contents of the byfn-network.json sheet into it, and save it as byfn-network.json
nano
3 Open byfn-network.json and replace all instances of the text INSERT_ORG1_CA_CERT with the CA certificate for the peer nodes for Org1
[Run in dir - dev5@ubuntu:~$ fabric-samples/first-network]
3.1 Run the command and get the certificate from the generated .pem file so that it can be embedded into the above connection profile
3.11 Go to first network folder
cd /
cd home/dev5
cd fabric-samples/first-network
3.12 Execute the command to generate /tmp/composer/org1/ca-org1.txt
awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt > /tmp/composer/org1/ca-org1.txt
3.13 Open ca-org1
3.14 Copy the contents of the file /tmp/composer/org1/ca-org1.txt and replace the text INSERT_ORG1_CA_CERT in the .json file
4 In the same .json file - you need to replace all instances of the text INSERT_ORG2_CA_CERT with the CA certificate for the peer nodes for Org2
4.1 Run the command and get the certificate from the generated .pem file so that it can be embedded into the above connection profile
4.11 Execute the command to generate /tmp/composer/org2/ca-org2.txt
awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt > /tmp/composer/org2/ca-org2.txt
4.12 Open ca-org2
4.13 Copy the contents of the file /tmp/composer/org2/ca-org2.txt and replace the text INSERT_ORG2_CA_CERT in the .json file
5 Replace all instances of the text INSERT_ORDERER_CA_CERT with the CA certificate for the orderer node
5.1 Run the command and get the certificate from the generated .pem file so that it can be embedded into the above connection profile
5.11 Execute the command to generate /tmp/composer/ca-orderer.txt
awk 'NF {sub(/\r/, ""); printf "%s\\n",$0;}' crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt > /tmp/composer/ca-orderer.txt
5.12 Open ca-orderer.txt
5.13 Copy the contents of the file /tmp/composer/ca-orderer.txt and replace the text INSERT_ORDERER_CA_CERT in the .json file
6 Save this file as /tmp/composer/byfn-network.json
This connection profile now describes the fabric network setup: all the peers, orderers and certificate authorities that are part of the network. It defines all the organizations that are participating in the network and also defines the channels on this network. Hyperledger Composer can only interact with a single channel, so only one channel should be defined.
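A quick, optional sanity check after hand-editing the profile (assumes a full Python install is available; any JSON linter works):
python -m json.tool /tmp/composer/byfn-network.json > /dev/null && echo "byfn-network.json is valid JSON"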
7 Customize the connection profile for Org1
In the connection profile /tmp/composer/byfn-network.json, between the version property and just before the channel property, add this block, which specifies the organization that Alice belongs to, in a client section with optional timeouts. Save the connection profile as a NEW file called byfn-network-org1.json in /tmp/composer/org1/
"client": {
"organization": "Org1",
"connection": {
"timeout": {
"peer": {
"endorser": "300",
"eventHub": "300",
"eventReg": "300"
},
"orderer": "300"
}
}
},
8 Customize the connection profile for Org2
In the connection profile /tmp/composer/byfn-network.json, between the version property and just before the channel property, add this block, which specifies the organization that Bob belongs to, in a client section with optional timeouts. Save the connection profile as a NEW file called byfn-network-org2.json in /tmp/composer/org2/
"client": {
"organization": "Org2",
"connection": {
"timeout": {
"peer": {
"endorser": "300",
"eventHub": "300",
"eventReg": "300"
},
"orderer": "300"
}
}
},
9 Copy the certificate and private key to /tmp/composer/org1 for Org1
export ORG1=crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
cp -p $ORG1/signcerts/A*.pem /tmp/composer/org1
cp -p $ORG1/keystore/*_sk /tmp/composer/org1
10 Copy the certificate and private key to /tmp/composer/org2 for Org2
export ORG2=crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
cp -p $ORG2/signcerts/A*.pem /tmp/composer/org2
cp -p $ORG2/keystore/*_sk /tmp/composer/org2
11 Creating business network cards for the administrator for Org1
composer card create -p /tmp/composer/org1/byfn-network-org1.json -u PeerAdmin -c /tmp/composer/org1/Admin@org1.example.com-cert.pem -k /tmp/composer/org1/*_sk -r PeerAdmin -r ChannelAdmin -f PeerAdmin@byfn-network-org1.card
12 Creating business network cards for the administrator for Org2
composer card create -p /tmp/composer/org2/byfn-network-org2.json -u PeerAdmin -c /tmp/composer/org2/Admin@org2.example.com-cert.pem -k /tmp/composer/org2/*_sk -r PeerAdmin -r ChannelAdmin -f PeerAdmin@byfn-network-org2.card
13 Import the business network cards for the administrator for Org1
composer card import -f PeerAdmin@byfn-network-org1.card --card PeerAdmin@byfn-network-org1
14 Import the business network cards for the administrator for Org2
composer card import -f PeerAdmin@byfn-network-org2.card --card PeerAdmin@byfn-network-org2
15 Create the business network archive (.bna) file for the desired business network, as sketched below
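These notes do not show how the .bna referenced below was produced. Assuming the business network project lives in a local directory (the directory name here is a hypothetical placeholder), the Composer CLI can package it roughly like this:
cd my-business-network        # hypothetical directory containing package.json, models/ and lib/
composer archive create -t dir -n .
# Produces <name>@<version>.bna based on package.json; rename or copy it to match the
# archive file name used in the install commands below if necessary.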
16 Install the business network onto the peer nodes for Org1
composer network install --card PeerAdmin@byfn-network-org1 --archiveFile fta-fab-net.bna
17 Install the business network onto the peer nodes for Org2
composer network install --card PeerAdmin@byfn-network-org2 --archiveFile fta-fab-net.bna
18 Define the endorsement policy for the business network
Create an endorsement policy file using the content of the endorsement-policy.json sheet and save it in /tmp/composer/ with the name endorsement-policy.json (a sketch of its shape follows below)
The endorsement policy you have just created states that both Org1 and Org2 must endorse transactions in the business network before they can be committed to the blockchain. If Org1 or Org2 do not endorse transactions, or disagree on the result of a transaction, then the transaction will be rejected by the business network.
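The endorsement-policy.json sheet itself is not included in these notes. In the Hyperledger Composer multi-org tutorial the policy has roughly this shape (reproduced from memory, so treat it as an assumption); it can be written with a heredoc:
cat > /tmp/composer/endorsement-policy.json <<'EOF'
{
    "identities": [
        { "role": { "name": "member", "mspId": "Org1MSP" } },
        { "role": { "name": "member", "mspId": "Org2MSP" } }
    ],
    "policy": {
        "2-of": [
            { "signed-by": 0 },
            { "signed-by": 1 }
        ]
    }
}
EOF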
19 Retrieve business network administrator certificates for Org1
Run the composer identity request command to retrieve certificates for Alice to use as the business network administrator for Org1
composer identity request -c PeerAdmin@byfn-network-org1 -u admin -s adminpw -d alice
20 Retrieve business network administrator certificates for Org2
Run the composer identity request command to retrieve certificates for Bob to use as the business network administrator for Org2
composer identity request -c PeerAdmin@byfn-network-org2 -u admin -s adminpw -d bob
21 Start the business network
composer network start -c PeerAdmin@byfn-network-org1 -n fta-fab-net -V 0.1.14 -o endorsementPolicyFile=/tmp/composer/endorsement-policy.json -A alice -C alice/admin-pub.pem -A bob -C bob/admin-pub.pem
Note: the version number of the .bna file should be used in this command.
If the command fails, check that Docker is running, start the Fabric network, install the .bna file, and check that /tmp/composer is present (see the checks below).
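A few quick checks for that situation (illustrative):
sudo systemctl is-active docker    # Docker must be running
sudo docker ps                     # the Fabric peer, orderer and CA containers should be up
ls /tmp/composer                   # connection profiles, certs and the endorsement policy must exist
composer card list                 # the PeerAdmin cards must have been imported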
Once the business network is started, both Alice and Bob will be able to access the business network, start to set it up, and onboard other participants from their respective organizations.
Alice and Bob must create new business network cards with the certificates that they created in the previous steps so that they can access the business network.
22 Creating a business network card to access the business network as Org1
Create a business n/w card
composer card create -p /tmp/composer/org1/byfn-network-org1.json -u alice -n fta-fab-net -c alice/admin-pub.pem -k alice/admin-priv.pem
Import the business network card
composer card import -f alice@fta-fab-net.card
Test the connection to the blockchain business network
composer network ping -c alice@fta-fab-net
23 Creating a business network card to access the business network as Org2
Create a business n/w card
composer card create -p /tmp/composer/org2/byfn-network-org2.json -u bob -n fta-fab-net -c bob/admin-pub.pem -k bob/admin-priv.pem
Import the business network card
composer card import -f bob@fta-fab-net.card
Test the connection to the blockchain business network
composer network ping -c bob@fta-fab-net
24 Start the RESTful API by running composer-rest-server
Answer the questions as given below:
Enter the name of the business network card to use: alice@fta-fab-net
Specify if you want namespaces in the generated REST API: never use namespaces
Specify if you want to use an API key to secure the REST API: No
Specify if you want to enable authentication for the REST API using Passport: No
Specify if you want to enable the explorer test interface: Yes
Specify a key if you want to enable dynamic logging: dts
Specify if you want to enable event publication over websockets: Yes
Specify if you want to enable TLS Security for the REST API: No
Open a browser and go to the URL
http://localhost:3000/explorer
This will open the REST server's explorer interface.
A fresh installation of CentOS 7 needs a fresh installation of PostgreSQL, with a new user and a new role. I am following the steps described in this tutorial to accomplish this goal. However, the terminal is not providing the interactive menu that the tutorial promises when I type createuser -interactive. Instead, I get the following blank prompt:
[this_centos_user#localhost ~]$ sudo -i -u postgres
[sudo] password for this_centos_user:
-bash-4.2$ createuser –interactive
-bash-4.2$
What specific commands need to be typed in order to get the interactive createuser interface to appear and let me give a username, password, etc.?
The Specific Situation:
1.) First, I installed the postgresql-server package and the "contrib" package with the following command:
sudo yum install postgresql-server postgresql-contrib
2.) Next, I created a new PostgreSQL database cluster:
sudo postgresql-setup initdb
3.) I then set up password authentication by editing PostgreSQL's host-based authentication (HBA) configuration: I typed sudo vi /var/lib/pgsql/data/pg_hba.conf and changed the following lines to use md5 instead of ident:
host    all    all    127.0.0.1/32    md5
host    all    all    ::1/128         md5
4.) After saving and exiting vi, I started and enabled PostgreSQL with the following:
sudo systemctl start postgresql
sudo systemctl enable postgresql
5.) Next, I logged in to PostgreSQL with the postgres account that we created above, and tried to create the user with the code from the top of the OP above, as follows:
[this_centos_user#localhost ~]$ sudo -i -u postgres
[sudo] password for this_centos_user:
-bash-4.2$ createuser –interactive
-bash-4.2$
So how do I create this user?
Not a direct answer to the question but related:
If you're like me and you've typed createuser --interactive inside of psql, and you're wondering why you've just gone from postgres=# to postgres-#, you've misunderstood the instructions. Exit psql with Ctrl+C and then \q and run createuser --interactive from the shell as the postgres user.
There appears to have been a typo in the tutorial. The correct syntax is:
-bash-4.2$ createuser --interactive
Note that --interactive in this answer is correct, while -interactive in the OP was wrong.
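For completeness, the corrected invocation from the shell (run as the postgres OS user, not inside psql), plus a non-interactive alternative via psql; the role name and password below are placeholders:
-bash-4.2$ createuser --interactive --pwprompt some_new_role
-bash-4.2$ psql -c "CREATE ROLE some_new_role WITH LOGIN SUPERUSER PASSWORD 'change_me';"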
I'm using VirtualBox to run a Linux guest that hosts a Cassandra DB, and I'm trying to access it from my Windows host; however, I don't know what the right configuration is to do that.
In VirtualBox I'm using "host-only networking" to communicate from Windows.
Does anyone know how to do this configuration?
Maybe it's the network configuration of the guest.
In a VirtualBox environment, if you want to communicate with the guest from the host, the network type of the VM must be "bridged networking" or "host-only networking".
You can find more information here : https://www.virtualbox.org/manual/ch06.html.
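If the VM currently has only a NAT adapter, a host-only adapter can also be added from the command line, for example (the VM name and host-only interface name below are placeholders, and the VM must be powered off first):
VBoxManage hostonlyif create                # creates e.g. vboxnet0
VBoxManage modifyvm "cassandra-vm" --nic2 hostonly --hostonlyadapter2 vboxnet0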
Access Cassandra on Guest VM from Host OS
For future reference to myself and others, this worked for me for Cassandra v3.10:
http://grokbase.com/t/cassandra/user/14cpyy7bt8/connect-to-c-instance-inside-virtualbox
Once your guest VM is provisioned with Cassandra: I had a host-only network adapter set up with IP 192.168.5.10.
Then I had to modify /etc/cassandra/cassandra.yaml to set:
From
rpc_address: localhost
To
rpc_address: 192.168.5.10
Then run sudo service cassandra restart and give it 15 seconds...
Then on the guest VM or on the host the following worked:
cqlsh 192.168.5.10
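If cqlsh can't connect, two quick checks on the guest (illustrative) are:
nodetool status              # the node should report UN (Up/Normal)
sudo ss -tlnp | grep 9042    # the CQL port should be bound to 192.168.5.10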
Hope that helps someone.
Vagrantfile for reference
Note: it doesn't work for multiple nodes in a cluster yet.
# Adjustable settings
## Cassandra cluster settings
mem_mb = "3000"
cpu_count = "2"
server_count = 1
network = '192.168.5.'
first_ip = 10
servers = []
seeds = []
cassandra_tokens = []
(0..server_count-1).each do |i|
  name = 'cassandra-node' + (i + 1).to_s
  ip = network + (first_ip + i).to_s
  seeds << ip
  servers << {'name' => name,
              'ip' => ip,
              'provision_script' => "sleep 15; sudo sed -i -e 's/^rpc_address: localhost/rpc_address: #{ip}/g' /etc/cassandra/cassandra.yaml; sudo service cassandra restart;",
              'initial_token' => 2**127 / server_count * i}
end

# Configure VM server
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/xenial64"
  servers.each do |server|
    config.vm.define server['name'] do |x|
      x.vm.provider :virtualbox do |v|
        v.name = server['name']
        v.customize ["modifyvm", :id, "--memory", mem_mb]
        v.customize ["modifyvm", :id, "--cpus", cpu_count]
      end
      x.vm.network :private_network, ip: server['ip']
      x.vm.hostname = server['name']
      x.vm.provision "shell", path: "provision.sh"
      x.vm.provision "shell", inline: server['provision_script']
    end
  end
end
provision.sh
# install Java and a few base packages
add-apt-repository ppa:openjdk-r/ppa
apt-get update
apt-get install vim curl zip unzip git python-pip -y -q
# Java install - adjust if needed
# apt-get install openjdk-7-jdk -y -q
apt-get install openjdk-8-jdk -y -q
# Install Cassandra
echo "deb http://www.apache.org/dist/cassandra/debian 310x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
curl https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
sudo apt-get update
sudo apt-get install cassandra -y
sudo service cassandra start
So you are trying to connect to Cassandra on the Linux guest in your VirtualBox? Or is it the other way around?
Anyway, whatever the direction, make sure that your IP is reachable and that the Cassandra ports are open (start with 9042).
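A quick way to test both from the machine that should be able to reach Cassandra (the IP below comes from the example answer above; adjust to your setup):
nc -zv 192.168.5.10 9042     # or: telnet 192.168.5.10 9042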