What is the location or path of the logs stored after bootstrapping a node?
For example, from the workstation:
knife bootstrap IP -x name -P passwd -r 'role[dummy]'
Where do the logs get stored?
The logs are stored in /var/log/chef on most systems.
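If you want them somewhere else, the log destination for chef-client is controlled by the log_location setting in the node's /etc/chef/client.rb; a minimal sketch (the path shown is just an example, not a Chef default):

log_location "/var/log/chef/client.log"   # write the client run log here
log_level :info                            # and at this verbosity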
I am a beginner in Hadoop. I made a Hadoop cluster (one master and two slaves) on Google Cloud Platform.
I accessed the master of the cluster from my local Linux machine: ssh -i key key@public_ip_of_master
Then I did sudo su - inside the cluster, because the Hadoop commands only appear while being root.
Then I started HDFS using start-dfs.sh and start-all.sh.
Now the problem is that I want to transfer files from the local Linux file system to the Hadoop cluster and vice versa, using the following command (run inside the cluster while being root):
root@master:~# hdfs dfs -put /home/abas1/Desktop/chromFa.tar.gz /Hadoop_File
The problem is that the local path, /home/abas1/Desktop/chromFa.tar.gz, is never recognized, and I cannot figure out what to do.
I am sure I am missing something trivial, but I do not know what it is. I have to use either -copyFromLocal or -put.
"local path is never recognized"
That is not a Hadoop problem, then. You are on the master node (over SSH) as the root user. There is a /root folder with files, and probably no /home/abas1.
In other words, run ls -l /home and you'll see which local files are available.
To get files onto the master server so you can upload them from that terminal session, you first need to SCP them there from the other machine.
Exit the SSH session, then:
scp -i key /home/abas1/Desktop/chromFa.tar.gz root@master-ip:/tmp
ssh -i key root@master-ip
Then you can do this:
hdfs dfs -mkdir /Hadoop_File
ls -l /tmp | grep chromFa # for example, to check the file is there
hdfs dfs -put /tmp/chromFa.tar.gz /Hadoop_File/
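To verify the upload, you can list the HDFS directory afterwards:

hdfs dfs -ls /Hadoop_File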
"Hadoop functions only appears while being root."
Please do not use root for interacting with Hadoop services. Create unique user accounts for HDFS, YARN, ZooKeeper, etc., with restricted permissions, as you would for any other Unix process.
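A minimal sketch of what that could look like (the account name hadoopuser is illustrative, and this assumes the HDFS superuser account is called hdfs, as in most distributions):

useradd -m hadoopuser                                       # dedicated unprivileged account
sudo -u hdfs hdfs dfs -mkdir /user/hadoopuser               # HDFS home directory for it
sudo -u hdfs hdfs dfs -chown hadoopuser:hadoopuser /user/hadoopuser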
Using Dataproc will do this for you, and you can still SSH into it, so you should really consider using it instead of a manual GCE cluster.
We have a problem with Apache Directory Studio (version 2.0.0.v20200411-M15 and previous).
When we connect to an OpenLDAP instance, we can't see the naming contexts of the Root DSE.
We have OpenLDAP configured as multi-master, and the strange thing is that on one node we can see them and on the other we can't.
[Screenshot: Root DSE with the entry missing]
[Screenshot: Root DSE showing the entry]
When we try to get the info via ldapsearch, we can see the correct naming context for both nodes:
ldapsearch -H ldap://localhost:16389 -x -D "cn=adminLDAPI,ou=system,dc=XXXXXXX,dc=com" -W -s base -a always "(objectClass=*)" "namingcontexts"
[Screenshot: ldapsearch finds namingcontexts]
If we try to fetch the Base DNs via the connection options, nothing is fetched on the "wrong" node.
[Screenshot: no Base DN returned]
Entering the Base DN manually doesn't work either.
I've checked cn=config and cn=database, but everything is the same on both nodes.
Any ideas?
Thanks in advance
Currently, I use
$ sudo service postgresql start
to start the PostgreSQL server and
$ sudo -u postgres createdb testdb --owner ownername
to create a database. However, these commands need root privileges. How can I do this without root privileges/sudo on Linux (Ubuntu)?
You can run PostgreSQL without root privs by creating a new instance (which PostgreSQL calls a "cluster") and starting it.
You can't use the Ubuntu init scripts, wrapper tools like pg_ctlcluster, etc., if you do this. You must use only PostgreSQL's own tools.
To create the new PostgreSQL instance with the superuser equal to your username, data directory in your home directory, and md5 auth enabled by default, use:
initdb -D $HOME/my_postgres -A md5 -U $USER
Adjust as desired; see initdb --help.
You'll then need to edit postgresql.conf to change the port to a non-default one, since your system probably runs its own PostgreSQL on the default port 5432; a sketch follows below. (If you want to limit access strictly to you, you can instead set listen_addresses = '' and unix_socket_directories = '/home/myuser/postgres_socket' or whatever. But it's simpler to just use a different port.)
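The relevant lines in $HOME/my_postgres/postgresql.conf would look something like this (the port number is only an example, chosen to match the psql command below):

port = 5434
# optional, for socket-only access instead of a TCP port:
# listen_addresses = ''
# unix_socket_directories = '/home/myuser/postgres_socket'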
To start it:
pg_ctl -D $HOME/my_postgres -w start
To connect to it, specify the port you chose:
psql -p 5434 ...
(If you changed unix_socket_directories you'll also want to specify the path you gave, like -h /home/myuser/postgres_socket.)
To make psql etc connect to your postgres by default, edit your ~/.bashrc to add something like
export PGPORT=5434
but note that'll also affect the default port for connections to other hosts.
To stop it:
pg_ctl -D $HOME/my_postgres -w stop
but you can also just shut the machine down without stopping it; PostgreSQL doesn't care and will recover safely the next time you start it.
To autostart it when you log in, given that it's set up in your home directory, you'd have to use your desktop environment's run-at-startup features. They vary depending on environment and version, so I can't give details here; it's different for GNOME 3, Unity (Ubuntu), KDE, XFCE, etc. A more portable alternative is sketched below.
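One desktop-independent option (my suggestion, not part of the original setup) is a cron @reboot entry added via crontab -e, assuming pg_ctl is on your PATH at boot:

@reboot pg_ctl -D $HOME/my_postgres -w start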
Note that this approach still uses the system packages for PostgreSQL. This is important because if you uninstall (say) PostgreSQL 9.4 and install 9.6, your copy in your home dir will stop working. If you want it entirely independent of system packages, as you probably do if you don't control the system, you should compile PostgreSQL from source or use the binary installer to install inside your home directory.
PostgreSQL can run without root permissions.
Just download the binaries from
https://www.enterprisedb.com/download-postgresql-binaries
unpack them, and run the following.
Init the database:
./bin/initdb -D $HOME/data
Run postgres (it stays in the foreground):
./bin/postgres -D $HOME/data
Create a database (from another terminal):
./bin/createdb mydb
Connect with psql:
./bin/psql mydb
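If you'd rather not keep postgres in the foreground, the bundled pg_ctl can start it in the background (the log file path is just an example):

./bin/pg_ctl -D $HOME/data -l $HOME/pg.log start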
(https://www.golery.com/pencil/vU)
I have downloaded the sandbox from Hortonworks (CentOS), then tried to follow the tutorial. The ambari-admin-password-reset command seemed to be missing. I also tried to log in with PuTTY; the console asked me to change the password, so I did.
Now the command seems to be there, but I have different passwords for the console and for PuTTY for the same user.
I have tried to find out why the same user, 'root', has two different passwords (one for the VirtualBox console and one for PuTTY) that I can log in with. I see different commands on each box. More than that, when I share a folder I can only see it on the VirtualBox console, but not in the PuTTY session, which is really frustrating.
How can I make sure that what I see from PuTTY is the same as what I see from the VirtualBox console?
I think it is somehow related to TTY, but I am not sure.
EDIT:
Running commands on the VirtualBox machine gives this output:
grep "^passwd" /etc/nsswitch.conf
OUT: passwd: files sss
grep root /etc/passwd
OUT: root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin
getent passwd root
OUT: root:x:0:0:root:/root:/bin/bash
EDIT:
I think this is all about Docker containers. It seems that the machine's port 2222 is the SSH port for the HDP 2.5 container, not for the hosting machine.
Now I get another problem: when running
docker exec sandbox ls
it gets stuck. Any help?
Thanks to any helpers.
So now I've had the time to analyze the sandbox VM and write it up for other users.
As you stated correctly in your edit of the question, it's the Docker container setup of the sandbox that confuses things, with two separate root users:
Via ssh root@127.0.0.1 -p 2222 you get into the Docker container called "sandbox". This is CentOS release 6.8 (Final), containing all the HDP services, especially the Ambari service. The configuration enforces a password change at first login for the root user. Inside this container you can also execute ambari-admin-password-reset and set a password for the Ambari admin.
Via console access you reach the Docker host running CentOS 7.2; here you can log in with the default root password for the VM as found in the HDP docs.
Coming to your sub-question about the hanging docker exec: it seems to be a bug in that specific Docker version. If you google it, you will find issues discussing this or similar problems with Docker.
So I thought it would be a good idea to just update the host via yum update. However, this turned out to be a difficult path.
yum tried to update the kernel, but complained that there was not enough space on the boot partition.
So I moved the boot partition onto the root partition:
edit /etc/fstab and comment out the /boot entry
cp -a /boot /boot.org (copy the boot files onto the root partition while the partition is still mounted)
umount /boot
cp -a /boot.org/. /boot/ (restore the files into the now-empty mount point)
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
reboot
After that I found out that the Docker configuration was broken and Docker did not start anymore. In the logs it complained about:
"Error starting daemon: error initializing graphdriver:
\"/var/lib/docker\" contains other graphdrivers: devicemapper; Please
cleanup or explicitly choose storage driver (-s )"
So I edited /etc/systemd/system/multi-user.target.wants/docker.service and changed the ExecStart setting to:
ExecStart=/usr/bin/dockerd --storage-driver=overlay
After a systemctl daemon-reload (so systemd picks up the edited unit), a service docker start, and a docker start sandbox, the container worked again. I could log in to the container, and after an ambari-server restart everything worked again.
And now, with the new Docker version 1.12.2, docker exec sandbox ls works again.
So, to sum up: the docker exec command has a bug in that specific version of the sandbox, but you should think twice if you want to upgrade your sandbox.
I ran into the same issue.
The HDP 2.5 sandbox runs all of its components in a Docker container, but commands like docker exec -it sandbox /bin/bash or docker attach sandbox got stuck.
When I ran a simple ps aux, I found several /usr/bin/docker-proxy commands which looked like:
/usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 60000 -container-ip 172.17.0.2 -container-port 60000
They probably forward the HTTP ports of the various UIs of HDP components.
I could ssh into the container IP (here 172.17.0.2) using root/hadoop to authenticate. From there, I could use all the "missing" commands like ambari-admin-password-reset.
$ ssh root@172.17.0.2
... # change password
$ ambari-admin-password-reset
NB: I am new to docker, so there's probably a better way to deal with this.
I'd like to post the instructions for 3.0.1 here.
I followed the instructions for installing Hortonworks version 3.0.1 here: https://youtu.be/5TJMudSNn9c
After running the Docker container, go to your browser and enter "localhost:4200"; that will take you to the in-browser terminal of the container that hosts Ambari. Enter "root" as the login and "hadoop" as the password, change the root password, and then run "ambari-admin-password-reset" to reset the Ambari password.
In order to be able to use sandbox-hdp.hortonworks.com, you need to add the line "127.0.0.1 sandbox-hdp.hortonworks.com" at the end of the /private/etc/hosts file on your Mac.
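One way to do that (assuming you have sudo rights on the Mac):

echo "127.0.0.1 sandbox-hdp.hortonworks.com" | sudo tee -a /private/etc/hosts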
If the password is not accepted, you can reset it from recovery mode:
In the top-right corner, click the power button, choose Restart from the power-off drop-down, and press the Esc key while it boots up to get into the recovery menu.
Select the advanced options entry and hit Enter.
Select recovery mode and hit Enter.
Select root to drop into a root shell.
Remount the filesystem read-write:
mount -o remount,rw /
List the home directories to find your username:
ls /home
Change the password, using your own username:
passwd username
As the last step, enter the new password twice, pressing Enter after each.
Hopefully you have changed the password (:
In my use case I am using a single EC2 instance [not a cluster]. I want to create a database and a user with all privileges programmatically. Is there a config file which I can edit and copy to the right location after InfluxDB is installed?
Could someone help me with this?
There isn't any config option that you can use to do that with InfluxDB itself. After starting up an instance, you can use the InfluxDB HTTP API to create the users. The curl command to do so would be the following:
curl "http://localhost:8086/query" --data-urlencode "q=CREATE USER myuser WITH PASSWORD 'mypass' WITH ALL PRIVILEGES"
Just run this command for each of the users you'd like to create. After that, you'll need to enable the auth value in the [http] section of the config.
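For reference, the relevant part of influxdb.conf would then look roughly like this (a minimal excerpt, for InfluxDB 1.x):

[http]
  enabled = true
  auth-enabled = true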
You can use Ansible to set up InfluxDB with your own recipe.
Here's the Ansible module documentation that you can use:
http://docs.ansible.com/ansible/influxdb_database_module.html
Or use any config/deploy manager that you prefer; I'd do this any day instead of some SSH script or who knows what.
Puppet: https://forge.puppet.com/tags/influxdb
Chef: https://github.com/bdangit/chef-influxdb
And you can use any of the above config managers to provision/manipulate your EC2 instance(s).
Use the admin token and this command (InfluxDB 2.3 CLI):
.\influx.exe user create -n yourusername -p yourpassword -o "your org name" --token admintokengoeshere
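On Linux or macOS the equivalent would be (same flags; only the executable name differs):

influx user create -n yourusername -p yourpassword -o "your org name" --token admintokengoeshere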