How to throttle bandwidth for OverGrive in Linux (Debian)? - linux

I've installed trickle but can't seem to get it to throttle overGrive.
$ trickle -d 500 -u 100 overgrive
trickle: Could not reach trickled, working independently: No such file or directory
trickle: exec(): No such file or directory
Is there another way to get overGrive to stop sucking up all my bandwidth while syncing?
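The "Could not reach trickled" warning is harmless on its own (it only means the trickled daemon is not running), but the exec() failure usually means trickle could not find the program it was asked to launch. One thing to try, as a sketch and assuming overgrive is installed somewhere on your PATH, is to pass trickle the resolved path and run it in standalone mode:
$ trickle -s -d 500 -u 100 "$(which overgrive)"
# -s tells trickle not to look for the trickled daemon at all;
# -d/-u limit download/upload to roughly 500/100 KB/s, as in the original command.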

I managed to solve the issue I had, and overGrive has been working fine for the last couple of weeks. It turned out that it had synchronized with some public files created by other users, which had nothing to do with my Google Drive account. What all these files had in common is that they belonged to some courses and had names like "MATH&141, Mod 01: Quiz 2 Toolkit". For some reason these files didn't have a .doc extension and had the symbols & and : in their names, which seems to have caused overGrive to get stuck on them forever.
Anyway, I performed the following steps and it fixed the issue:
1. Download and install the latest version of overGrive.
2. Clear all trash files from Google Drive online.
3. Delete all files from your local Google Drive folder, delete the following files if present, and restart overGrive (see the command sketch after this list):
.overgrive.lastsync
.overgrive.cache
4. Turn off automatic sync, and start synchronization manually.
5. Wait until full synchronization is finished.
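On the command line, that delete-and-restart step might look roughly like this (a sketch, assuming the local sync folder is ~/Google Drive and that the hidden state files live in your Home folder; adjust the paths to your setup):
$ rm -rf ~/"Google Drive"/*
$ rm -f ~/.overgrive.lastsync ~/.overgrive.cache
$ overgrive &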
You can check the log file called .overgrive.log in your Home folder to see whether there were any errors during the synchronization. It can happen that overGrive gets stuck on some specific file and tries to synchronize it over and over again, causing heavy download/upload usage.
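One simple way to watch for that (a sketch, assuming problems are logged with the word "error") is to follow the log while a sync runs, or grep it afterwards:
$ tail -f ~/.overgrive.log
$ grep -i error ~/.overgrive.log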

Related

NT Authority/System can't see protected OS files

The Question:
Why can't the LocalSystem account (NT Authority\System) see files in the Recycle Bins or the Temporary Internet Files directory?
Background:
I created a scheduled task to run using the System account. The purpose of the task is to execute the Disk Cleanup utility with predefined settings (for example: cleanmgr.exe /sagerun:1). When it executes, it seems to run with no errors. But when I check the resources it's supposed to clean (Temporary Internet Files, Recycle Bin, etc.), they're still there.
So I thought maybe cleaning up the two resources manually might work. I developed a console application in C# that clears the Recycle Bin and the Temporary Internet Files. I test it and it works just fine. But again, when I attempt to run it as a scheduled task with the System account, I run into the same issue again.
Following the log, it looks like when the application runs under the System account, it sees no files in the Recycle Bin or the Temporary Internet Files directory.
Checking the Security tab of the Temporary Internet Files directory shows System as having full access to that directory.
I'm so puzzled by this issue. I may be missing something but I assumed the LocalSystem account has the highest privilege on a machine. Is that not the case?

How to monitor a directory for file changes without using inotifywait?

I need a VM for development, and my IDE runs on the host. I have discovered that inotifywait does not work with shared folders, as I am sharing a local folder with my Linux guest using VirtualBox.
Basically, I have a simple bash script which needs to watch a directory and wait for any file changes. inotifywait would be the best option, but I cannot get it to work with my shared folder.
Is there another option for my problem?
Depending on the sizes of the files and the nature of the changes, you could:
Create a checksum (MD5, CRC, SHA-256) of the files and watch for changes (see the polling sketch after this list)
Check the size of the files and watch for changes
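A minimal polling sketch of the checksum idea, assuming the shared folder is mounted at /mnt/shared (a hypothetical path) and that find and md5sum are available inside the guest:
#!/usr/bin/env bash
# Poll the directory every few seconds and compare a combined checksum of its
# files; any change in content, size, or the file list changes the checksum.
WATCH_DIR="/mnt/shared"   # adjust to your shared-folder mount point
INTERVAL=2

previous=""
while true; do
    current=$(find "$WATCH_DIR" -type f -exec md5sum {} + 2>/dev/null | sort)
    if [ -n "$previous" ] && [ "$current" != "$previous" ]; then
        echo "Change detected in $WATCH_DIR"
        # ...trigger your build/reload step here...
    fi
    previous=$current
    sleep "$INTERVAL"
done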

Renaming the executable's image name gives it write permission

Dear community members,
We have three Windows 7 Professional computers with identical hardware. None of them is connected to a domain, directory service, etc.
We run the same executable image on all three computers. On one of them I had to rename it, because with my application's original filename it has no write access to its working directory.
I manually gave the Users group full access permissions on the working directory, but this did not solve it.
I suspect some kind of deny mechanism in Windows based on the executable's name.
I searched the registry for the executable's name but did not find anything relevant or meaningful.
This situation occurred after a lot of crashes and updates of my program on that computer (I am a developer). One day it suddenly stopped being able to open its files. I did not touch the registry or change anything else in the OS.
My executable's name is karbon_tart.exe
When it starts, it calls CreateFile (open if the file exists, create if it does not) to open the karbon_tart.log and karbon_tart.ini files.
I tried it both with the files existing and without them, and in neither case can the program open the files.
But if I just rename the executable to karbon_tart_a.exe, the program can open the files whether they exist or not.
Thank you for your interest
Regards
Ömür Ölmez.
I figured it out in the end.
It was because of an old copy of my application in the Virtual Store (Windows' UAC file virtualization redirects a legacy application's writes to protected folders into per-user copies under %LOCALAPPDATA%\VirtualStore, so stale copies there can shadow the real files).

source code location for debugging multiple instances of an application

Hi, I have an application running separately (one instance per customer) in different folders, one for each customer.
Each customer is a separate user on my machine.
At the moment I have the source code in each of these folders, where I rebuild the code for each instance. Would it be better if I did something like the following?
create a shared folder where I build the code
deploy the binary into each user's folder
allow each user to access the source code in READ ONLY mode
when it is time to debug, gdb run from each user's folder will still be able to read the source code, and debugging will work
Do you think this would be a better approach, or is there a better practice?
My only concern is that each user gets the chance to read the source code, but since the users will not access their folders directly (they are under my control), this should not trouble me.
I am using CentOS 6.4, SVN and g++/GDB.
"in different folders"
There are no "folders" on UNIX; they are called directories.
"I rebuild the code for each instance"
Why would you do that?
Is the code identical (it sounds like it is)? If so, build the application once. There is no reason at all to have multiple copies of the resulting binary, or the sources.
If you make the directory with sources and binaries world-readable, then every user will be able to debug it independently.
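A rough sketch of that layout, assuming a world-readable shared directory /opt/myapp (the path and repository URL are placeholders) that holds a single checkout and a single debug build:
$ mkdir -p /opt/myapp/bin
$ svn checkout <your-repo-url> /opt/myapp/src
$ g++ -g -O0 -o /opt/myapp/bin/myapp /opt/myapp/src/*.cpp
$ chmod -R a+rX /opt/myapp    # sources and binary readable by every user
# Each customer account can then debug the shared binary from its own directory;
# gdb locates the sources through the paths recorded in the debug info:
$ gdb /opt/myapp/bin/myapp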

rsync --delay-updates on Cygwin doesn't work?

What I'm trying to do:
I want to deploy files to a .NET-based website. Any time the DLLs change, Windows recycles the web app. When I rsync the files over, the app can recycle several times because of the delay, instead of the preferred single time. This takes the site out of commission for a longer period of time.
How I tried to solve it:
I attempted to remedy this by using --delay-updates, which is supposed to stage all of the file changes in temporary files before swapping them in. This appeared to be exactly what I wanted; however, passing the --delay-updates argument does not appear to behave as advertised. There is no discernible difference in the output (with -vv), and the end behavior is identical (the app recycles multiple times rather than once).
I don't want to run Cygwin on all of the production machines for stability reasons, otherwise I could rsync to a local staging directory, and then perform a local rsync, which would be fast enough to be "atomic".
I'm running Cygwin 1.7.17, with rsync 3.0.9.
I came across atomic-rsync (http://www.opensource.apple.com/source/rsync/rsync-40/rsync/support/atomic-rsync), which accomplishes this by rsyncing to a staging directory, renaming the existing directory, and then renaming the staging directory into its place. Sadly this does not work in a Windows setting, because you cannot rename folders that contain running DLL files (permission denied).
You are able to remove folders with running binaries; however, this results in recycling the app on every deployment rather than just when there are updates to the DLLs, which is worse.
Does anyone know how to either
Verify that --delay-updates is actually working
Accomplish my goal of updating all the files atomically (or rather, very very quickly)?
Thanks for the help.
This is pretty ancient, but I eventually discovered that --delay-updates was actually working as intended. The app only appeared to be recycling multiple times due to other factors.
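For what it is worth, one way to see the delay happening (a sketch, with ./site/ and /cygdrive/w/inetpub/mysite standing in for the real source and destination) is to watch the destination while the transfer runs: --delay-updates stages the new files in a .~tmp~ directory inside each destination directory (or in the directory named by --partial-dir) and only renames them into place at the very end.
$ rsync -avz -vv --delay-updates ./site/ /cygdrive/w/inetpub/mysite/
# in a second terminal, while the transfer is still running:
$ ls -a /cygdrive/w/inetpub/mysite/.~tmp~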
