I am having trouble adding an external process to my CouchDB database. The database currently contains a few records, all of which have standalone attachments in the form of PNGs or JPGs. I want to add the Couch_Image_Resizer (by KlausTrainer) to the database so that I can use the queries it offers to dynamically resize the images on request. However, at the moment any resize request only returns an error:
http://virtualMachineAddress/_image/archive/test/the_starry_night_painting.jpg?resize=500x500
{"error":"error","reason":"{conn_failed,{error,econnrefused}}"}
I have followed the instructions to the letter, replacing any instance of localhost or 127.0.0.1 with the IP address of my virtual machine (an Elastic IP, so it should never change) where needed.
I have also altered the local.ini file as was instructed so that it includes the following:
[httpd_global_handlers]
_image = {couch_httpd_proxy, handle_proxy_req, <<"http://127.0.0.1:5985">>}
Finally, I have ensured that the program is running via the ./start.sh command. If it is run more than once it returns the following; I am unsure whether this is relevant:
root#couchdb couchdb/couch_image_resizer# ./start.sh
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) {application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
Crash dump was written to: erl_crash.dump
Kernel pid terminated (application_controller) {application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}})
Some info that might be helpful:
erl_crash.dump: pastebin
Server is a virtual AWS machine running Debian 7.9 Wheezy.
The database is hosted externally on this server.
CouchDB version: 1.2.0
The database is not in Admin Party mode, accounts with permissions are in use.
GitHub link: Couch_Image_Resizer
Erlang: erts-5.9.1 64-bit
ImageMagick: 6.8.9-9
I am clearly missing something here; if you need anything else, just ask. If anyone can shed any light on what I am missing, I would greatly appreciate it!
I have found a solution to this, although there may be others.
Stop the service, set its permissions so that it is owned exclusively by the couchdb user, add the start.sh file path to the [os_daemons] section of CouchDB's local.ini, then restart the database and launch the application as the root user. Doing this kick-started the service, and it now functions normally and as intended.
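For reference, the relevant local.ini entry looks something like this (the daemon name and install path are illustrative; use the path where you checked out the resizer):
[os_daemons]
image_resizer = /opt/couch_image_resizer/start.sh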
I have run into a problem today where I am unable to connect via SSH to my Google Cloud VM instance running debian-10-buster. SSH had been working until today, when it suddenly lost connection while Docker was running. I've tried rebooting and resetting the VM instance, but the problem persists. This is the serial console output on GCE, but I am not sure what to look for in it, so any help would be highly appreciated.
Another weird thing: earlier today, before the problem started, my disk usage was fine, and then suddenly I was getting a bunch of errors that the disk was out of space, even after I tried clearing up a bunch of space. df showed the disk 100% full, to the point where I couldn't even install ncdu to see what was taking the space. I then rebooted the instance to see if that would help, and that's when the SSH problem started. Now I am unable to connect via SSH at all (even through the online GCE interface), so I am not sure what next steps to take.
Your system has run out of disk space for the boot (root) file system.
The error message is:
Root filesystem has insufficient free space
Shut down the VM, resize the disk larger in the Google Cloud web GUI, and then restart the VM.
Provided that there are no uncorrectable file system errors, your system will start up, resize the partition and file system automatically, and be fine.
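If you prefer the command line to the web GUI, the equivalent gcloud steps look roughly like this (instance name, disk name, zone, and size are placeholders for your own values; the boot disk usually shares the instance's name):
gcloud compute instances stop my-instance --zone=us-central1-a
gcloud compute disks resize my-instance --size=30GB --zone=us-central1-a
gcloud compute instances start my-instance --zone=us-central1-a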
If you have modified the boot disk (partition restructuring, added additional partitions, etc) then you will need to repair and resize manually.
I wrote an article on resizing the Debian root file system. It goes into more detail than you need, but it explains the low-level details of what happens.
Google Cloud – Debian 9 – Resize Root File System
I have installed Docker and Portainer on my Asustor home NAS. This issue appears to be specific to the implementation of Docker/Portainer provided in their app store. I have been working directly with the Portainer staff and they have not seen this issue before.
I have been following instructions from Portainer (https://www.youtube.com/watch?v=V0OvPyJZOAI) to deploy an agent and found where Docker stores volumes (a non-standard Linux location); however, I am now getting an error that I believe is also caused by the non-standard Linux implementation used by the NAS OS. The error happens when I go to start the service while following the steps in the video linked above. The error I am getting is "starting container failed: error creating external connectivity network: cannot restrict inter-container communication: please ensure that br_netfilter kernel module is loaded".
The response I got from Asustor Support is:
The kernel module is in the [NAS OS]. So if you want it working, you will need to manually insert the module. But please note that we have not yet tested it, so there might be a risk to the stability of the system.
I have located the file path of the kernel module by logging in via SSH, but I do not know what I need to do to insert the module as the Asustor support team suggested.
Screenshot of Portainer error
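For what it's worth, "inserting" a module on a generic Linux system usually means one of the commands below (the .ko path is a placeholder for the path you located over SSH, and, as Asustor warns, loading it on the NAS firmware is untested):
modprobe br_netfilter                # if the module is in the standard module tree
insmod /path/to/br_netfilter.ko      # or load it from an explicit file path
lsmod | grep br_netfilter            # verify that it is loaded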
This is a duplicate of a post I created in the Docker forum, so I will close one of the two once this problem is solved. But since no one has answered in the Docker forum and my problem persists, I am posting it again here, looking forward to an answer.
I would like to expose a server monitoring app as a Docker container. The app I have written relies on /proc to read system information like CPU utilization or disk stats, so I have to forward the information provided by the host's /proc virtual file system to my Docker container.
So I made a simple image (using the first or second intro on the Docker website: Link) and started it:
docker run -v=/proc:/host/proc:ro -d hostfiletest
My assumption was that the running container could then read from /host/proc to obtain information about the host system.
I fired up a console inside the container to check:
docker exec -it {one of the funny names the container gets} bash
And checked the content of /host/proc.
The easiest way to check was to read /host/proc/sys/kernel/hostname - that should yield the hostname of the VM I am working on.
But I get the hostname of the container, while /host/proc/uptime gives me the correct uptime of the VM.
Am I missing something here? Maybe something conceptual?
Docker version 17.05.0-ce, build 89658be running on Linux 4.4.0-97-generic (VM)
Update:
I found several articles describing how to run a specific monitoring app inside a container using the same approach I mentioned above.
Update:
Just tried using an existing Ubuntu image - same behavior. Running the image privileged and with pid=host doesn't help.
Greetings
Peepe
The reason for this behavior is that /proc is not a normal filesystem. According to procfs, it is an interface for accessing kernel data and system information. This interface provides a file-like structure, which can mislead people into thinking it is a normal directory.
Files in /proc are also not normal files: they are empty (size = 0). You can check for yourself:
$ stat /proc/sys/kernel/hostname
File: /proc/sys/kernel/hostname
Size: 0 Blocks: 0 IO Block: 1024 regular empty file
So the file doesn't hold any data; when you read it, the kernel dynamically returns the corresponding system information.
To answer your question, /proc/sys/kernel/hostname is just an interface for accessing the hostname. Depending on where you access that interface, on the host or in the container, you will get the corresponding hostname. This also applies when you use the bind mount -v /proc:/host/proc:ro, since a bind mount only provides an alternative view of /proc. If you read the interface /host/proc/sys/kernel/hostname, the kernel returns the hostname of the box you are currently in (the container).
In short, think of /proc/sys/kernel/hostname as a mirror: if your host stands in front of it, it reflects the host; if the container does, it reflects the container.
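You can see the mirror effect directly (the alpine image here is just an illustrative small image):
cat /proc/sys/kernel/hostname
# on the host: prints the host's hostname
docker run --rm -v /proc:/host/proc:ro alpine cat /host/proc/sys/kernel/hostname
# prints the container's generated hostname, not the host's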
I know it's a few months later now, but I came across the same problem today.
In my case I was using psutil in Python to read disk stats of the host from inside a Docker container.
The solution was to mount the whole host file system read-only into the Docker container with -v /:/rootfs:ro and to point psutil at the host's proc with psutil.PROCFS_PATH = '/rootfs/proc'.
Now psutil.disk_partitions() lists all partitions of the host file system. Other host system information should work the same way as long as the reading code points at /rootfs/proc (though note, per the answer above, that namespace-aware entries such as the hostname will still reflect the container).
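A minimal sketch of the whole approach, assuming a stock python image (the image tag and inline one-liner are illustrative):
docker run --rm -v /:/rootfs:ro python:3-slim sh -c 'pip install -q psutil && python -c "import psutil; psutil.PROCFS_PATH = \"/rootfs/proc\"; print(psutil.disk_partitions())"'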
I'm writing a small program in Free Pascal on Linux and connecting to a Firebird database on the same server. For testing, I initially wrote a console application using the TIBConnection components in FP and successfully connected to the Firebird database and listed records from one of the tables.
Now I want to do the same thing from a CGI application under Apache. A sample CGI app with various parameters displays different HTML results via the WebBroker "actions" as expected.
So both preliminary tests, connecting to Firebird and getting a CGI web app running, have worked. The final test is to combine them and that's where my problem is.
Whenever I run the test cgi application and try to connect to the Firebird database, I get a "permission denied" error. I've left the username, password, and port all at defaults, have checked the firewall, switched between "localhost" and "127.0.0.1" and several other things including setting the permissions on the database file to read/write globally (for temporary testing, of course).
I've found lots of information on the internet about connecting to Firebird on Linux and lots of information about writing CGI applications, but very little where it combines the two subjects. I'm sure there's a subtle yet important security or firewall issue, but it eludes me.
CentOS 6.6 64-bit on a virtual machine
Firebird 2.1.7 64-bit
Lazarus 1.4.0 64-bit
Anyone have any suggestions on what I could try?
I figured out how to get it working by reading the solution to a different problem. I'm not sure why disabling the firewall didn't work (I had to completely uninstall it), and I don't know what SELinux is yet (I had to set it to "permissive"), but I will need to study those two issues so the live server won't be left vulnerable.
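For anyone who hits the same wall: on CentOS the SELinux side can usually be handled without leaving the whole system permissive. A sketch of the usual commands, run as root (httpd_can_network_connect is the stock policy boolean that lets Apache-spawned processes such as CGIs open network connections):
getenforce                                # show the current SELinux mode
setenforce 0                              # go permissive until reboot, for quick testing
setsebool -P httpd_can_network_connect 1  # persistently allow Apache/CGI network connections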
I have a Linux server on which some LDAP server is running.
It is started and stopped using the commands start-slapd / stop-slapd.
Does this mean that slapd is the LDAP server that is running?
I also see OpenLDAP-related files/installation on that server, but I am not sure whether they are being used.
Is my understanding correct that slapd can function independently of OpenLDAP?
I need to set up a similar LDAP server on another machine with the same LDAP data. Should I just install slapd and import the data?
I am new to the LDAP world and am seeking advice.
'slapd' is the name of the OpenLDAP daemon. They aren't two different things.
On my new Linux server I can see OpenLDAP files under /etc/openldap, but I cannot see slapd ('locate slapd' returns nothing). So how do I start OpenLDAP then? – Jasper
OK Jasper, try:
which slapd
... from your command-line. If the slapd binary is located at a valid path, it will show you the fully qualified path to said file.
This is different from 'locate': 'locate' works from a file index that must be refreshed periodically (with 'updatedb') in order to be accurate, whereas 'which' searches the directories on your PATH and reports the first match it finds.
Does this mean that slapd is the LDAP server that is running?
To see if slapd is running, from the command-line, try one of these commands:
ps -ef | grep [s]lapd
pidof slapd
The first command above will show you more information about slapd IF it is running: the process ID, the owning user of the process, the start time, and the full command with its arguments.
The second command just shows a process ID, which is nice and succinct.
I need to set up a similar LDAP server on another machine with the same LDAP data. Should I just install slapd and import the data?
It depends. Do these two servers ALWAYS need to be identical whenever a change is made? Or is one for production use and one for testing purposes?
If the data needs to be consistent across all servers at all times, then you need to configure what is called replication. You would define your first server as the master and any subsequent servers as 'shadows' (a.k.a. slaves, or consumers in current OpenLDAP terminology), and configure the shadows to receive updates from the master automatically. Replication is a fairly deep topic, so see the OpenLDAP Administrator's Guide at http://www.openldap.org/doc/admin24/ for documentation on setting up, understanding, and troubleshooting it.
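As a taste of what that involves, a consumer's slapd.conf stanza looks roughly like the following (hosts, DNs, and credentials are placeholders; see the Admin Guide for the full set of options):
syncrepl rid=001
  provider=ldap://master.example.com
  type=refreshAndPersist
  searchbase="dc=example,dc=com"
  bindmethod=simple
  binddn="cn=replicator,dc=example,dc=com"
  credentials=secret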
If the servers DO NOT need to stay in sync, then yes, as you said, just building a new server and importing the data manually is perfectly fine.
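The manual export/import itself is typically just a pair of commands (run slapadd while the new slapd is stopped):
slapcat -l backup.ldif    # on the existing server: dump the directory to an LDIF file
slapadd -l backup.ldif    # on the new server: load the LDIF into the fresh database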
I hope this helps...
Max