I'm running CouchDB 1.0.1 on Ubuntu and everything is working OK - except that I've just noticed my log files are non-existent. They seem to have been like this for nearly a year, but to be fair I haven't really been using the system, as it's a test bed for a project I've just picked up again.
/var/log/couchdb contained two files: a months-old couch.log.1 and a couch.log with size 0 - which is suspicious. I've deleted the old files and tried restarting couch, but the log files stubbornly stay absent!
I've restarted couch using
/etc/init.d/couchdb restart
But no joy.
My local.ini file has this entry:
[log]
level = debug
file = /var/log/couchdb/couch.log
And /var/log/couchdb is owned by couchdb and is in group couchdb, so I don't think it's a permissions issue. There is plenty of disk space on the server too.
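For the record, this is roughly how I verified that (assuming the daemon runs as the couchdb user):
ls -ld /var/log/couchdb
sudo -u couchdb touch /var/log/couchdb/write-test && echo "couchdb can write here"
sudo rm -f /var/log/couchdb/write-test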
I've rebooted the server as well in frustration - no difference.
How do I persuade CouchDB to start logging again? The reason it has become an issue is that I'm trying to PUT some standalone attachments, but only the small ones are working, so I'm trying to look in my (non-existent) log files to see what the problem might be.
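For context, the uploads are plain standalone attachment PUTs along these lines (the database, document, and rev values here are placeholders):
curl -X PUT 'http://localhost:5984/mydb/mydoc/bigfile.pdf?rev=1-abc123' -H 'Content-Type: application/pdf' --data-binary @bigfile.pdf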
Any ideas?
There is a possibility that the log file configuration is being set by some other .ini file.
Issue a GET request to http://localhost:5984/_config/log to see what CouchDB has set.
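For example, with curl (assuming the default port):
curl http://localhost:5984/_config/log
It should return a small JSON object with the file and level values CouchDB is actually using, so you can compare them against what is in your local.ini.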
I had stuff like this happen to me because I had installed CouchDB multiple times using different methods (compiling from source, using apt, the install script that CouchOne put out at one point, etc.). It was hard to figure out which local.ini was the real one!
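If you suspect multiple installs, something like this should turn up every candidate config file and show which binary is actually running (paths vary by install method):
sudo find / -name local.ini 2>/dev/null
ps -ef | grep [c]ouchdb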
OK so it looks as if my LIVE ini files were actually at /usr/local/etc/couchdb/local.ini and not /etc/couchdb/local.ini
And the real logs were in /usr/local as well.
Not quite sure why I had both sets; I guess I had installed CouchDB a couple of times in the past and was looking in the legacy files by mistake!
Hope this helps someone else ... I have been scratching my head for a couple of hours over it now!
I've installed trickle but can't seem to get it to throttle overGrive.
$ trickle -d 500 -u 100 overgrive
trickle: Could not reach trickled, working independently: No such file or directory
trickle: exec(): No such file or directory
Is there another way to get overGrive to stop sucking up all my bandwidth while syncing?
I managed to solve the issue I had, and my overGrive has been working fine for the last couple of weeks. It turned out that it had synchronized with some public files created by different users, which didn't have anything in common with my Google Drive account. What these files had in common was that they belonged to some courses and had names like "MATH&141, Mod 01: Quiz 2 Toolkit". For some reason these files didn't have a .doc extension and had the symbols & and : in their names, which seems to cause overGrive to get stuck on them forever.
Anyway, I performed the following steps and it fixed the issue:
Download and install the latest version of overGrive.
Clear all trash files from Google Drive online.
Delete all files from your local Google Drive folder if present, remove the two hidden state files below, and restart overGrive (see the commands after this list):
.overgrive.lastsync
.overgrive.cache
Turn off automatic sync and start synchronization manually.
Wait until the full synchronization is finished.
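For reference, the cleanup in step 3 amounts to something like this (the paths assume the defaults in your Home folder):
rm -f ~/.overgrive.lastsync ~/.overgrive.cache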
You can check the log file called .overgrive.log in your Home folder to see if there were errors during synchronization. It can happen that overGrive blocks on some specific file and tries to synchronize it over and over again, causing heavy download/upload usage.
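To watch the log while a sync is running:
tail -f ~/.overgrive.log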
I have an ownCloud server and the ownCloud desktop client. What I want to do is be able to delete things server-side and have them automatically deleted from the PC. The problem is that when files are deleted from the server, the ownCloud client displays a "Remove All Files?" warning with the choice of removing all files or keeping them. Is there a way to suppress the prompt and automatically remove all files?
In version 2.2.3 (and maybe earlier), you can change the configuration file to disable the prompt.
See the code where the prompt is invoked and the code showing the configuration file property.
If you edit (on Windows) c:\Users\myuser\AppData\Owncloud\owncloud.cfg and add the following under the [General] section, you will no longer get the prompt.
promptDeleteAllFiles=false
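So the relevant part of owncloud.cfg ends up looking something like this (any other keys already under [General] stay as they are):
[General]
promptDeleteAllFiles=false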
The short answer: You cannot change this currently.
The long answer: The dialog was added as a safeguard because there were cases where you could lose all your files unintentionally, e.g. if your admin re-created your account and left it empty. The client would assume the files were gone and, since it could not know better, would replicate the deletion locally. The code is still there today just to be safe.
If you are fearless, you can patch Folder::slotAboutToRemoveAllFiles(). Alternatively, you could open a bug report so we can solve this for everyone. What is your motivation to be able to do this without a prompt?
PS: The sources can be found on GitHub. URL and build instructions at http://doc.owncloud.org/desktop/1.5/building.html.
I have a script that processes the files someone drops into ownCloud and then moves them to their final storage place. However, this prompt stops the client from syncing until I manually log in to acknowledge it... I guess I will learn how to patch this. Dropbox doesn't do this; Google Drive doesn't do this. But since I can't use cloud services (compliance issues), I have to use this solution until I can build a new secure upload mechanism.
What I'm trying to do:
I want to deploy files to a .NET-based website. Any time the DLLs change, Windows recycles the web app. When I rsync files over, the app can recycle several times because of the transfer delay, instead of the preferred single recycle. This takes the site out of commission for a longer period of time.
How I tried to solve it:
I attempted to remedy this by using --delay-updates, which is supposed to stage all of the file changes in temporary files before swapping them into place. This appeared to be exactly what I wanted; however, the --delay-updates argument does not appear to behave as advertised. There is no discernible difference in the output (with -vv), and the end behavior is identical (the app recycles multiple times rather than once).
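For reference, my invocation looks roughly like this (paths are placeholders):
rsync -avvz --delay-updates ./site/ deployuser@webserver:/inetpub/wwwroot/app/
As I understand it, --delay-updates stages each changed file in a .~tmp~ holding directory inside its destination directory and renames everything into place at the end of the transfer, so seeing files appear under .~tmp~ in the -vv output would be one way to confirm it is active.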
I don't want to run Cygwin on all of the production machines for stability reasons; otherwise I could rsync to a local staging directory and then perform a local rsync, which would be fast enough to be "atomic".
I'm running Cygwin 1.7.17, with rsync 3.0.9.
I came across atomic-rsync (http://www.opensource.apple.com/source/rsync/rsync-40/rsync/support/atomic-rsync), which accomplishes this by rsyncing to a staging directory, renaming the existing directory, and then renaming the staging directory into its place. Sadly this does not work on Windows, because you cannot rename folders containing running DLL files (permission denied).
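For reference, the atomic-rsync trick boils down to something like the following (directory names are placeholders; the two renames at the end are exactly what fails on Windows while DLLs are loaded):
rsync -a --delete ./site/ /srv/app-staging/
mv /srv/app /srv/app-old
mv /srv/app-staging /srv/app
rm -rf /srv/app-old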
You are able to remove folders containing running binaries, but this results in recycling the app on every deployment rather than just when the DLLs have changed, which is worse.
Does anyone know how to either
Verify that --delay-updates is actually working
Accomplish my goal of updating all the files atomically (or rather, very very quickly)?
Thanks for the help.
This is pretty ancient, but I eventually discovered that --delay-updates was actually working as intended. The app only appeared to be recycling multiple times due to other factors.
I've been using Cyberduck 4.2.1 to connect to my EC2 instance to edit my Node projects. I've used Node-dev to reload my project/server as files are updated, but if I save the files through Cyberduck's Edit command, the server never really reloads and usually crashes.
I've tested with a few different editors (TextMate, Dashcode) with the same result. Node-dev restarts correctly when I edit files from the terminal. I have tried a few others that do roughly the same thing, hotnode and up. They all work when editing via the terminal, but fail when I edit files through Cyberduck. I think it has something to do with the way Cyberduck replaces the remote files when saving.
Does anyone know what might be causing this, and maybe suggest some changes to these github projects? If not, are there better Mac FTP clients that might not have this issue?
I don't know about Node-dev, but my educated guess is that it crashes because it reads a partially uploaded file. I suggest trying the Upload with temporary filename feature, available as a hidden option in Cyberduck.
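If I remember correctly, on macOS you enable it with a hidden preference via defaults write; the key below is my best recollection, so double-check it against the Cyberduck documentation:
defaults write ch.sudo.cyberduck queue.upload.file.temporary true
With it enabled, Cyberduck uploads to a temporary filename and renames the file once the transfer completes, so a watcher like Node-dev should only ever see complete files.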
You can try Cyberduck 6.6.2... it works for me.
We're using CruiseControl.Net to do our continuous integration of our web applications.
We build the project, zip it up, and copy it to the integration server. A little bit later, the web.config is cached away, the folder deleted, the zip file unzipped (recreating the folders), and then the web.config is copied back.
The issue is that somewhere in the process, one of the folders (not always the same one) will have its permissions totally hosed - even the owner can't open the folder to look at the contents.
We reboot and everything is golden; we can delete the folder, redeploy, and everything works.
My question, other than whether anyone has experienced anything like this, is: what tools do you suggest to figure out what exactly is messing up the permissions, given that it no longer happens after rebooting?
I figure if I can get a clue about what, I can figure out why.
Thanks,
E-
You could try finding a way to run CruiseControl.NET as a limited user; that way it can't set permissions higher than its own.
I'm afraid I don't remember the details, but it was an obscure security setting on the network that our IT guys tweaked, and it works now. I'm sorry I can't remember any more info, but I figured even this might be of some help in case someone else runs into the issue.