Notify service that synchronization is complete - Linux

There is a web service that saves, among other things, files to file storage on a Linux OS.
To make these files accessible, they are periodically copied to another server using lsync.
Access is provided through nginx. Synchronization occurs every 10 seconds.
The problem is that a client service can request files that have just been saved and get a 404.
Is it possible to notify the web service that synchronization has occurred and the files are available to clients, using lsync's own functionality?
If not, how else can this problem be solved?
Thank you.

Related

Saving an image to the file system in ASP.NET Core

I'm building an application that saves a lot of images using a C# ASP.NET Web API.
The best way seems to be to save the images in the file system and store their paths in the database.
However, I am concerned about load balancing. Since every server will put the image in its own file system, how can another server behind the same load balancer retrieve the image?
If you have the resources for it, I would argue that the premise that
the best way is to save them in the file system and save the image path into the database
is not true at all.
Instead, I'd say using an existing file server system is probably going to produce the best results, if you are willing to pay for the service.
For .NET the 'go-to' would be Azure Blob Storage, which is ideal for non-streamed data like images.
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-dotnet
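For what it's worth, a minimal sketch of the upload side with the Azure.Storage.Blobs client might look like the following; the "images" container name and the connection string are placeholders, and the returned URI is what you would store in the database instead of a local path.

    using System.IO;
    using System.Threading.Tasks;
    using Azure.Storage.Blobs;

    public class ImageStore
    {
        private readonly BlobContainerClient _container;

        public ImageStore(string connectionString)
        {
            // "images" is a placeholder container name.
            _container = new BlobContainerClient(connectionString, "images");
            _container.CreateIfNotExists();
        }

        // Uploads the image and returns the blob URI to store in the database.
        public async Task<string> SaveAsync(string fileName, Stream content)
        {
            BlobClient blob = _container.GetBlobClient(fileName);
            await blob.UploadAsync(content, overwrite: true);
            return blob.Uri.ToString();
        }
    }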
Otherwise, you can try to create your own file storage service from scratch. In that case you will effectively be creating a separate API, apart from the main cluster that handles your actual app; this secondary API just handles file storage and runs on its own dedicated server.
You then simply create an association between Id <-> File Data on the file server, and your app servers can request and push files to the file server via those Ids.
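As a rough illustration of that Id <-> file association (the routes, storage root, and Guid-based Ids are all made up for the sketch, not a recommendation), the secondary API could be as small as an ASP.NET Core minimal API:

    // Hypothetical minimal file-storage API running on the dedicated file server.
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    const string storageRoot = "/var/fileserver"; // placeholder storage location
    Directory.CreateDirectory(storageRoot);

    // App servers push a file and get back the Id they should persist.
    app.MapPost("/files", async (HttpRequest request) =>
    {
        var id = Guid.NewGuid().ToString("N");
        await using var target = File.Create(Path.Combine(storageRoot, id));
        await request.Body.CopyToAsync(target);
        return Results.Ok(new { id });
    });

    // App servers fetch a file back by Id. A real server must validate the Id
    // (e.g. reject anything that is not a 32-character hex string).
    app.MapGet("/files/{id}", (string id) =>
    {
        var path = Path.Combine(storageRoot, id);
        return File.Exists(path) ? Results.File(path) : Results.NotFound();
    });

    app.Run();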
It's possible, but a file server is for sure one of those projects that seems straightforward at first; very quickly you realize it's a very difficult task, and it may have been easier to just pay for an existing service.
There might be existing self-hosted file server options out there as well!

Log FTP connection failures

I have an FTP server (IIS) in the cloud. It deals with text files that are sometimes gigabytes in size. Customers are complaining about connection failures or download/upload failures.
Is there any way I can log any failed (negative) action performed against my FTP server?
I have tried IFtpLogProvider in .NET, but it does not give me a valid FTP status.
For example, if I start an upload or download from the client and disconnect the network, it still records status 226, which means a successful transfer.
Either I am missing something with IFtpLogProvider or I have misunderstood the codes.
Is there any other way to record all the FTP transactions which will allow me to investigate the issue being faced by my customers?
I made a silly mistake. I did not enable FTP Extensibility in Windows Features within IIS. Once enabled, it started working.
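With FTP Extensibility enabled, a custom provider along these lines can single out failed actions. This is only a sketch based on my recollection of the IIS FTP extensibility samples; the FtpLogEntry property names, the status-code type, and the log path are assumptions to verify against the Microsoft.Web.FtpServer assembly you have installed.

    using System;
    using System.IO;
    using Microsoft.Web.FtpServer;

    public class FailureLogProvider : BaseProvider, IFtpLogProvider
    {
        private const string LogPath = @"C:\inetpub\logs\ftp-failures.log"; // assumed path

        void IFtpLogProvider.Log(FtpLogEntry loggingParameters)
        {
            // FTP status codes of 400 and above indicate a failed action;
            // 226 (successful transfer) and other sub-400 codes are skipped.
            if (loggingParameters.FtpStatus < 400)
            {
                return;
            }

            string line = string.Format(
                "{0:u} {1} {2} {3} {4} {5}",
                DateTime.UtcNow,
                loggingParameters.RemoteIPAddress,
                loggingParameters.UserName,
                loggingParameters.Command,
                loggingParameters.FullPath,
                loggingParameters.FtpStatus);

            File.AppendAllText(LogPath, line + Environment.NewLine);
        }
    }

The provider still has to be registered against the FTP site, and a transfer where the client simply vanishes mid-stream may still be logged as 226, so the disconnect case from the question may need a timeout-based check on top.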

Directory / File Monitoring on Windows Server

Would anyone happen to know of an application or project that is able to monitor a series of directories on a web server?
We currently develop sites using ColdFusion 10, and would like a method, script, or even application that actively monitors websites for modifications to any files and automatically notifies administrators any time someone or something has altered a file.
If ColdFusion can do this, that would be even better, and any advice on how to monitor directories would also be greatly appreciated.
There's an example of a directory watcher gateway in the docs: "Using the example event gateways and gateway applications".
Google around for issues people have had with it before diving in, though.
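If the ColdFusion gateway turns out to be troublesome, another option on Windows Server is a small .NET watcher built on FileSystemWatcher. A minimal sketch, where the site path is a placeholder and the notification step is just a console line standing in for an e-mail to administrators:

    using System;
    using System.IO;

    class SiteWatcher
    {
        static void Main()
        {
            var watcher = new FileSystemWatcher(@"C:\inetpub\wwwroot") // assumed site root
            {
                IncludeSubdirectories = true,
                NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
            };

            watcher.Changed += (s, e) => Report("changed", e.FullPath);
            watcher.Created += (s, e) => Report("created", e.FullPath);
            watcher.Deleted += (s, e) => Report("deleted", e.FullPath);
            watcher.Renamed += (s, e) => Report("renamed", e.FullPath);

            watcher.EnableRaisingEvents = true;
            Console.WriteLine("Watching... press Enter to stop.");
            Console.ReadLine();
        }

        static void Report(string action, string path)
        {
            // Placeholder: swap in an e-mail or logging call to notify administrators.
            Console.WriteLine($"{DateTime.Now:u} {action}: {path}");
        }
    }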

Monitoring file access in Linux

For an application I'm writing, I want to know which processes are accessing a particular file and dump that information into a log file. In the end one of the processes will delete this file, and I would like to know the process name for that too.
I can use the inotify library to monitor file access, but it does not tell me which process is accessing the file. This might be possible using the auditctl package on Linux as well, but I can't use that option either :-(
Actually, it is a controlled environment; for various reasons the end customer is willing to run a program but not to install new packages or make changes to the existing utilities.
It is not possible to reliably audit directly attached file access in Linux from userspace alone.
You could poll with lsof, but you would risk not detecting accesses between polls. The purpose of the original dnotify module (obsoleted by inotify...) was to avoid incurring the overhead of polling and to avoid losing events. The audit system gives user identification at the time of file open.
If you can move the file to an NFS server, then you can use the NFS logging to record access to the file.
The customer could be correct about not installing new packages if this is a production server or if it is a development server that is about to go live. You should consider asking for authorization to set up auditing on the next development or testing server.

Cloud Foundry: how to use the filesystem

I am planning to use the Cloud Foundry PaaS (from VMware) to host my Node.js application. I have seen that it has support for Mongo and Redis in the service layer and for the Node.js framework. So far so good.
Now I need to store my media files (images uploaded by users) on a filesystem. I have the metadata stored in Mongo.
I have been searching the internet, but have not yet found good information.
You cannot do that for the following reasons:
There are multiple host machines running your application. They each have their own filesystems. Each running process in your application would see a different set of files.
The host machines on which your particular application is running can change moment-to-moment. Indeed, they will change every time you re-deploy your application. Every time a process is started on a new host machine, it will see an empty set of files. Every time a process is stopped on an old host machine, all the files would be permanently deleted.
You absolutely must solve this problem in another way.
Store the media files in MongoDB GridFS.
Store the media files in an object store such as Amazon S3 or Rackspace Cloud Files.
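For illustration, the object-store route could look roughly like the sketch below. The question's app is Node.js; this is written in C# only to match the other examples on this page (the AWS SDK for JavaScript follows the same pattern). The bucket name is a placeholder, and the returned key is what you would keep next to the metadata in Mongo.

    using System.IO;
    using System.Threading.Tasks;
    using Amazon.S3;
    using Amazon.S3.Model;

    public class MediaStore
    {
        private readonly IAmazonS3 _s3 = new AmazonS3Client(); // region/credentials from environment
        private const string Bucket = "my-media-bucket";       // placeholder bucket name

        // Uploads the image and returns the key to store alongside the Mongo metadata.
        public async Task<string> SaveAsync(string key, Stream content, string contentType)
        {
            await _s3.PutObjectAsync(new PutObjectRequest
            {
                BucketName = Bucket,
                Key = key,
                InputStream = content,
                ContentType = contentType
            });
            return key;
        }
    }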
Filesystems in most cloud solutions are "ephemeral", so you cannot use the FS. You will have to use solutions like S3 or a DB for this purpose.
