Log FTP connection failures - IIS

I have an FTP server (IIS) in the cloud. It handles text files that are sometimes gigabytes in size. Customers are complaining about connection failures and download/upload failures.
Is there any way I can log every failed (negative) action performed against my FTP server?
I have tried IFtpLogProvider in .NET, but it does not give me a valid FTP status.
For example, if I start an upload or download from a client and then disconnect the network, it still records status 226, which indicates a successful transfer.
Either I am missing something with IFtpLogProvider, or I have misunderstood the status codes.
Is there any other way to record all FTP transactions so that I can investigate the issues my customers are facing?

I made a silly mistake: I had not enabled FTP Extensibility under Windows Features in IIS. Once it was enabled, the provider started working.
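For reference, once FTP Extensibility is installed, a custom IFtpLogProvider can single out failure replies. A minimal sketch in C#, assuming the Microsoft.Web.FtpServer assembly that ships with the extensibility feature; the log path is hypothetical, and the FtpLogEntry property names should be verified against your SDK:

using System;
using System.IO;
using Microsoft.Web.FtpServer;

// Writes any FTP reply of 400 or above (transient/permanent failures)
// to a plain-text file for later investigation.
public class FailureLogProvider : BaseProvider, IFtpLogProvider
{
    private const string LogPath = @"C:\inetpub\logs\ftp-failures.log"; // hypothetical path

    void IFtpLogProvider.Log(FtpLogEntry entry)
    {
        // 1xx-3xx are positive or intermediate replies; 4xx/5xx indicate failures.
        if (entry.FtpStatus < 400)
        {
            return;
        }

        string line = string.Format(
            "{0:u} session={1} user={2} cmd={3} status={4}.{5}",
            DateTime.UtcNow, entry.SessionId, entry.UserName,
            entry.Command, entry.FtpStatus, entry.FtpSubStatus);
        File.AppendAllText(LogPath, line + Environment.NewLine);
    }
}

Note that the assembly has to be installed in the GAC and the provider registered for the FTP service before IIS will call it, and Log should return quickly since it sits in the logging path of every session.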

Related

Notify service that synchronization is complete

There is a web service that saves, among other things, files to file storage on a Linux OS.
To make these files accessible, they are periodically copied to another server using lsync.
Access is provided through nginx. Synchronization occurs every 10 seconds.
The problem is that a client service can request files that have only just been saved and get a 404.
Is it possible, using lsync's functionality, to notify the web service that synchronization has occurred and the files are available to clients?
If not, how else can this problem be solved?
Thank you.
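If lsync itself offers no usable completion hook, one workaround is to have the client tolerate the propagation window by retrying a 404 for slightly longer than the sync interval. A minimal C# sketch; the retry count and delay are illustrative assumptions:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class FileFetcher
{
    private static readonly HttpClient Client = new HttpClient();

    // Retries a 404 for up to ~12 s, since the file may simply not have
    // been copied to this server yet (sync runs every 10 s in this setup).
    public static async Task<byte[]> GetWithRetryAsync(string url)
    {
        for (int attempt = 0; attempt < 6; attempt++)
        {
            HttpResponseMessage response = await Client.GetAsync(url);
            if (response.StatusCode != HttpStatusCode.NotFound)
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsByteArrayAsync();
            }
            await Task.Delay(TimeSpan.FromSeconds(2)); // wait out part of the sync window
        }
        throw new TimeoutException("File did not appear within the sync window: " + url);
    }
}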

Why is there activity on our FTP server when Cloudberry FTP backup says the job is finished?

Here is the setup:
We are testing Cloudberry for backing up files to a remote FTP server.
As a test, we are backing up files on a desktop, using Cloudberry FTP, to an FTP server (FileZilla Server) located on the same desktop. The FileZilla Server in turn accesses a Synology NAS located on the same network.
The job is set to run every 24 hours.
According to the Cloudberry interface, it last ran at midnight and lasted 1 h 31 min.
There are no running jobs showing in the Cloudberry interface.
HOWEVER, it is 9 AM and FileZilla Server is still showing file uploads. FileZilla has a counter that keeps track of the number of connections. The count is currently at 1.2 million, but there are only ~70,000 files being backed up.
I deleted the job and created a new one, with the same result.
So what is going on?
Alex
I found the root cause of this issue.
By looking through the logs in %programdata%\CloudBerryLab\CloudBerry Backup\Logs, I found that a Consistency job was running every hour...
No matter how many times I checked the backup job definition, this setting was never shown, as it is only displayed on the Welcome tab, not the Backup Plans tab...
I changed the Consistency job to run weekly.
Hope this helps somebody else.
Note: I am disappointed with the lack of support from CloudBerry, given that Stack Overflow is officially their support page as per http://www.cloudberrylab.com/support.aspx?page=support

Azure Virtual Machine Crashing every 2-3 hours

We've got a classic VM on Azure. All it does is run SQL Server with a lot of databases (we've got another VM, a web server, which is the web-facing side and accesses the SQL classic VM for data).
The problem is that since yesterday morning we have been experiencing outages every 2-3 hours. There doesn't seem to be any reason for them. We've been working with Azure support, but they are still struggling to work out what the issue is. There doesn't seem to be anything in the event logs that gives us any information.
All that happens is that we receive a Pingdom alert saying the box is down; we then can't remote into it, as it times out, and all database calls to it fail. Five minutes later it comes back up. It doesn't seem to fully reboot or anything; it just halts.
Any ideas what could be causing this? Any places we could look for better information, or ways to stop it from happening?
The only thing in the event logs that occurs around the same time is a DNS Client event: "Name resolution for the name [DNSName] timed out after none of the configured DNS servers responded."
Smartest/quickest recovery:
Did you check SQL Server by connecting inside the VM (internally) using localhost or 127.0.0.1\InstanceName? If you can connect to SQL Server internally without any issue, then capture a snapshot of the SQL Server VM and create a new VM from that capture (i.e., without losing any data).
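A quick way to run that internal check is a minimal C# probe; the connection string is an assumption for a default local instance with Windows authentication:

using System;
using System.Data.SqlClient;

class SqlProbe
{
    static void Main()
    {
        // Default local instance, Windows authentication, short timeout.
        var connectionString = "Server=127.0.0.1;Integrated Security=true;Connect Timeout=5";
        try
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                Console.WriteLine("Connected, server version " + conn.ServerVersion);
            }
        }
        catch (SqlException ex)
        {
            Console.WriteLine("SQL connection failed: " + ex.Message);
        }
    }
}

If this succeeds while external calls fail, the fault lies in the network path rather than in SQL Server itself.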
This issue may be caused by the following:
Azure Network Firewall
Windows Server Update
This ended up being a fault with the node/sector our VM was on. I fixed it by enlarging our VM instance (4 cores to 8 cores); this forced Azure to move it to another node/sector, which rectified the issue.

TortoiseSVN Error: Could not send request body: an existing connection was forcibly closed by the remote host

Let me preface this by saying I have basically zero knowledge of web development. That being said, I'll still try to provide as much information as I possibly can. Our client is using IIS 7 on a Windows Server 2008 R2 machine. The TortoiseSVN error they're getting is this:
Error: Could not send request body: an existing connection was forcibly closed by the remote host.
Using the powers of Google, it seems there are two possible things that could be going on here. As it is a 4 GB file, I've seen people mention that it could be a configuration issue: the timeout might be too short, or I might need to enable a setting somewhere to allow committing larger files. Or it could be a network issue. It might be useful to note that they can commit smaller files.
I've already tried disabling the firewall, as well as the antivirus, on the server and having them retry, but that didn't work. They are uploading from a desktop to the server, and both are on the same network behind a gigabit switch. I'm sure I'm missing information that would be useful to you, but I'm a total noob to web dev, to their setup, and to what they're actually trying to do. If you need any more information, I'll be glad to provide it.
The problem could be overly strict timeout options configured in Apache's reqtimeout module. I simply disabled it:
a2dismod reqtimeout
/etc/init.d/apache2 restart
Credit to: https://serverfault.com/questions/297562/svn-https-problem-could-not-read-status-line-connection-was-closed-by-ser
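Alternatively, rather than disabling the module outright, its timeouts can be relaxed to accommodate large commits; a sketch for /etc/apache2/mods-available/reqtimeout.conf, with illustrative values:

# Give the request body up to 300 s, extended as long as the client
# keeps sending at least 500 bytes/s; header limits stay at the defaults.
RequestReadTimeout header=20-40,MinRate=500 body=300,MinRate=500

Reload Apache afterwards for the change to take effect.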

FTP suddenly refuses connection after multiple & sporadic file transfers

I have an issue that my idiot web host support team cannot solve, so here it is:
When I'm working on a site and uploading many files here and there (small files, most of them a few dozen lines at most, mostly PHP and JS files, with some PNG and JPG files), after multiple uploads in a very short timeframe, the FTP chokes on me. It cuts me off with a "connection refused" error from the server end, as if I were brute-force attacking the server or trying to overload it. Then, after 30 minutes or so, it seems to work again.
I have a dedicated server with InMotion Hosting (which I do NOT recommend, but that's another story; I have too many accounts to switch over), so I have access to all the logs, etc., if you want me to look.
Here's what I have as settings so far:
My own IP is on the whitelist in the firewall.
FTP settings allow a maximum of 2000 connections at a time (which I am nowhere near hitting; I manage most of the accounts myself, without client access allowed).
Broken Compatibility ON.
Idle time: 15 minutes.
On the regular port 21.
Regular FTP (not SFTP).
Access to a subdomain of a major domain.
Anyhow, this is very frustrating because I have to pause my web development work in the middle of an update. Restarting FTP in WHM doesn't seem to resolve it right away either; I just have to wait. However, when I try to access the website directly through the browser, or use ping/traceroute to see whether I can reach it, there's no problem; only the FTP is cut off.
The FTP server is configured for such behavior. If you cannot change its configuration (or switch to another FTP server program on the server), you can't avoid it.
For example, vsftpd has many such configuration switches.
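To illustrate, here is a sketch of the kind of vsftpd.conf limits that produce exactly this "works, then refuses connections" pattern; the values are illustrative, not recommendations (vsftpd does not allow trailing comments, so each comment is on its own line):

# /etc/vsftpd.conf (illustrative values)
# Total simultaneous clients the daemon will accept:
max_clients=2000
# Connections allowed from a single client address:
max_per_ip=8
# Drop the session after this many failed logins:
max_login_fails=3
# 15 minutes, matching the idle setting above:
idle_session_timeout=900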
Switching to something else like scp or ssh should help.
(I'm not sure that calling your web host's support team idiots will help you.)
