Why is there activity on our FTP server while CloudBerry FTP backup says the job is finished?

Here is the setup:
We are testing CloudBerry for backing up files to a remote FTP server.
As a test, we are backing up files on a desktop, using CloudBerry FTP, to an FTP server (FileZilla Server) located on the same desktop. The FileZilla Server in turn accesses a Synology NAS located on the same network.
The job is set to run every 24 hours.
According to the CloudBerry interface, it last ran at midnight and lasted 1 h 31 min.
There are no running jobs showing in the CloudBerry interface.
HOWEVER, it is 9 AM and FileZilla Server is still showing file uploads. FileZilla has a counter that keeps track of the number of connections. The count is currently at 1.2 million, but there are only ~70,000 files being backed up.
I deleted the job and created a new one, with the same result.
So what is going on?
Alex

Found the root cause of this issue.
By looking through the logs in %programdata%\CloudBerryLab\CloudBerry Backup\Logs, I found that a consistency check job was running every hour...
No matter how many times I checked the backup job definition, this setting never showed up, because it is only displayed on the Welcome tab, not the Backup Plans tab...
Changed the consistency check job to run weekly.
Hope this helps somebody else.
Note: I'm disappointed with the lack of support from CloudBerry, given that Stack Overflow is officially their support page as per http://www.cloudberrylab.com/support.aspx?page=support

Related

Lotus Notes not running as scheduled

I have 3 agents in Lotus; these agents just update different CSV files on a shared drive. According to their logs they are running, but they only take a second. Checking the CSV files, they are not being updated.
I've tried adjusting the schedule time
Tried other servers
Changed the target
Disabled/re-enabled the agent
Made a copy of the agent
I haven't edited the code.
The workaround is to run these agents manually. That actually updates the CSV files, and the agents take at least 5 minutes to finish running, which is expected. They just suddenly stopped running as scheduled.
As Torsten mentioned, your Domino server does not have enough permissions. By default it runs as Local System, which does not have access to any network shares.
See this technote before it disappears: https://www.ibm.com/support/pages/domino-server-unable-locate-mapped-drives-when-started-windows-service
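Not from the technote, just an illustrative sketch: a quick way to see the difference is to list a directory via the mapped drive letter and via the UNC path while running under the same account the Domino service uses (the drive letter and share name below are placeholders).

# Hypothetical diagnostic: can the current account reach the share via a
# mapped drive letter vs. a UNC path? Paths below are placeholders.
import os

CANDIDATES = [
    r"Z:\logs",            # mapped drive letter (per-user, invisible to services)
    r"\\fileserver\logs",  # UNC path (works only if the account has share/NTFS rights)
]

for path in CANDIDATES:
    try:
        entries = os.listdir(path)
        print(f"OK   {path}: {len(entries)} entries")
    except OSError as exc:
        print(f"FAIL {path}: {exc}")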

ADF FTP Linked Service losing connection mid-file transfer

I am trying to copy data from a .csv file hosted on an FTP server (an Azure VM). When I execute the data pipeline I can see the FTP log and it initiates the file transfer, but at ~11 MB it severs the connection, attempts to reconnect, successfully reconnects, but then immediately disconnects.
Has anyone encountered this?
I can successfully transfer the file with Cyberduck to my local machine, and if I delete a lot of data from the CSV, making it much smaller, the pipeline works correctly.
I have gone through passive FTP settings and VM firewall settings, but I still cannot get the file to transfer completely.
I realized this is not a data transfer issue but a file format issue. It seems the FTP transfer doesn't download the entire file and then load it; it processes partial chunks, and there was a piece of it that was breaking (a double quote inside a string field, with the double quote also being the quote identifier).
So it is an issue, but a different issue, and not related to this question.
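For anyone hitting the same symptom, here is a minimal Python sketch (the data and column names are made up) of how a bare double quote inside a quoted field breaks a strict CSV parse, while a properly doubled quote does not:

# Illustrative only: a bare inner double quote vs. a doubled (escaped) one.
import csv
import io

broken  = 'id,description\n1,"6" pipe fitting"\n'    # inner quote not escaped
escaped = 'id,description\n1,"6"" pipe fitting"\n'   # inner quote doubled -> valid CSV

for label, text in [("broken", broken), ("escaped", escaped)]:
    reader = csv.reader(io.StringIO(text), strict=True)
    try:
        print(label, "->", list(reader))
    except csv.Error as exc:
        print(label, "-> parse error:", exc)

One data-side fix is to double (escape) the inner quotes, or use a different quote character, before the file is consumed.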

Moving files from multiple Linux servers to a central Windows storage server

I have multiple Linux servers with limited storage space that create very big daily logs. I need to keep these logs, but I can't afford to keep them on the servers for very long before they fill up. The plan is to move them to a central Windows server that is mirrored.
I'm looking for suggestions on the best way to do this. What I've considered so far are rsync and writing a script in Python or something similar.
The ideal backup method for me would be for the files to be copied from the Linux servers to the Windows server, then verified for size/integrity, and subsequently deleted from the Linux servers. Can rsync do that? If not, can anyone suggest a superior method?
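For reference, this is roughly the kind of Python script I had in mind, assuming the Windows share is already mounted on the Linux side (the mount point and log directory below are placeholders): copy each file, verify size and checksum, and delete the original only if both match.

# Rough sketch of the copy -> verify -> delete workflow.
# Assumes the mirrored Windows share is already mounted (e.g. via CIFS)
# at MOUNT_POINT; paths are placeholders.
import hashlib
import shutil
from pathlib import Path

LOG_DIR = Path("/var/log/myapp")          # placeholder source directory
MOUNT_POINT = Path("/mnt/log-archive")    # placeholder mounted Windows share

def sha256sum(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for src in LOG_DIR.glob("*.log"):
    dst = MOUNT_POINT / src.name
    shutil.copy2(src, dst)                        # copy with timestamps
    same_size = src.stat().st_size == dst.stat().st_size
    same_hash = sha256sum(src) == sha256sum(dst)
    if same_size and same_hash:
        src.unlink()                              # only delete after verification
    else:
        print(f"verification failed for {src}, keeping the original")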
You may want to look into using rsyslog on the Linux servers to send logs elsewhere. I don't believe you can configure it to delete logged lines after a verification step - I'm not sure you'd want to either. Instead, you might be best off with an aggressive logrotate schedule plus rsyslog.

Backup server for a NAS with web interface

I'm evaluating the features of a full-fledged backup server for my NAS (Synology). I need:
FTP access (backup remote sites)
SSH/SCP access (backup remote server)
web interface (in order to monitor each backup job)
automatic mail alerting if jobs fail
lightweight software (no mysql, sqlite ok)
optional: S3/Glacier support (as target)
optional: automatic long-term storage after a given time (i.e. local disk for 3 months, Glacier after that)
It seems like the biggest players are Amanda, Bacula and duplicity (and the like).
Any suggestions?
Thanks a lot
Before jumping into full server backups, please clarify these questions:
Backup software comes in agent-based and agentless variants; which one do you want to use?
Are you interested in open-source or proprietary software?
Determine whether your source and destination are on the same LAN or connected over the Internet. Try to get a picture of the bandwidth between source and destination and the volume of data being backed up.
Also consider your GUI requirements and which other OS platforms the backup software needs to support.
Importantly, look into the mail notification configuration.
I am presently setting one up for my project and so far have installed Bacula v7.0.5 with Webmin as the GUI. I am trying the same config in the Amazon cloud, using S3 as storage by mounting it with s3fs on the EC2 instance.
My Bacula software is the free community version. I haven't explored mail notification yet.

FTP suddenly refuses connection after multiple & sporadic file transfers

I have an issue that my idiot web host support team cannot solve, so here it is:
When I'm working on a site, and I'm uploading many files here and there (small files, most of them a few dozen lines at most, php and js files mostly, with some png and jpg files), after multiple uploads in a very short timeframe, the FTP chokes on me. It cuts me off with a "refused connection" error from the server end as if I am brute-force attacking the server, or trying to overload it. And then after 30 minutes or so it seems to work again.
I have a dedicated server with inmotion hosting (which I do NOT recommend, but that's another story - I have too many accounts to switch over), so I have access to all logs etc. if you want me to look.
Here's what I have as settings so far:
I have my own IP on the whitelist in the firewall.
FTP settings have a maximum of 2000 connections at a time (which I am nowhere near hitting - most of the accounts I manage myself, without client access allowed)
Broken Compatibility ON
Idle time 15 mins
On regular port 21
regular FTP (not SFTP)
access to a sub-domain of a major domain
Anyhow, this is very frustrating because I have to pause my web development work in the middle of an update. Restarting FTP in WHM doesn't seem to resolve it right away either - I just have to wait. However, when I try to access the website directly through the browser, or use ping/traceroute commands to see if I can reach it, there's no problem - just the FTP is cut off.
The FTP server is configured for such behavior. If you cannot change its configuration (or switch to another FTP server program on the server), you can't avoid it.
For example, vsftpd has many such configuration switches.
Switching to something else like scp or SSH should help.
(I'm not sure that calling your web support team idiots can help you.)
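Not something the answer above covers, but if the server is reacting to lots of connections opened in a short burst, one workaround sketch is to push a whole batch of files over a single FTP session instead of one connection per file. A minimal Python ftplib example, with the host, credentials and paths as placeholders:

# Sketch: upload a batch of small files over a single FTP session instead of
# opening a new connection per file. Host, credentials and paths are placeholders.
from ftplib import FTP
from pathlib import Path

LOCAL_DIR = Path("site/build")      # placeholder: local directory with the files
REMOTE_DIR = "/public_html/inc"     # placeholder: remote target directory

with FTP("ftp.example.com") as ftp:  # one control connection for the whole batch
    ftp.login(user="username", passwd="password")
    ftp.cwd(REMOTE_DIR)
    for path in sorted(LOCAL_DIR.glob("*")):
        if path.is_file():
            with path.open("rb") as f:
                ftp.storbinary(f"STOR {path.name}", f)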
