VC++ CLI/CLR delete files/folder over the network - visual-c++

I wish to delete files from a networked PC. The user has full control over a shared folder on that PC from which to delete files.
I have this code:
if (status)
{
    if (File::Exists(selectedfile))
        System::IO::File::Delete(selectedfile);
    else
        MessageBox::Show("File does not exist.");
}
else
{
    if (!System::IO::Directory::Exists(selectedfile))
        MessageBox::Show("The directory does not exist.");
    try
    {
        System::IO::Directory::Delete(selectedfile, true);
        if (System::IO::Directory::Exists(selectedfile))
        {
            deleted = false;
            System::IO::Directory::Delete(selectedfile, true);
        }
        else
            deleted = true;
    }
    catch (Exception^ ex)   // the try block needs a matching catch
    {
        MessageBox::Show(ex->Message);
    }
}
I included the second Delete in the directory branch because the folder is not deleted on the first attempt; only the files inside the folder are deleted. However, I get an Access denied error whenever I try to delete the now-empty folder.
How do I make sure that the directory and all its contents are deleted?

This is quite normal, one of the things that a multi-tasking operating system needs to do. The directory is in fact marked for deletion, but it cannot be removed yet because one or more processes has a handle open on the directory. In the case of Windows, that is commonly a process that uses the directory as its default working directory. Or maybe you've got an Explorer window open, looking at how your program is doing its job. Explorer uses ReadDirectoryChangesW() to get notified about changes in the directory so it knows when to refresh the view.
The directory will be physically removed from the drive as soon as the last handle is closed. While it exists in this zombified state, any attempt to do anything with the directory will produce an access error (Windows error code 5).
You'll need to account for this behavior in your program. Definitely remove the second Directory::Exists() test; when you didn't get an exception from the Delete call, you'll need to assume that the directory got deleted. That will be accurate, eventually.
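The same pattern translates to any language. Here is a minimal Python sketch of the advice above, assuming a local path stands in for the share: treat "no exception" as success, and only poll briefly in case the directory lingers in the zombified state while a handle is still open:

```python
import os
import shutil
import time

def delete_tree(path, retries=5, delay=0.2):
    """Delete a directory tree; tolerate the short window where the
    directory is marked for deletion but a handle is still open."""
    shutil.rmtree(path)  # raises on a real failure (e.g. access denied)
    # No exception: assume success, but poll briefly for the zombie state.
    for _ in range(retries):
        if not os.path.exists(path):
            return True
        time.sleep(delay)
    return not os.path.exists(path)
```

The key design point is that the retry loop only waits and re-checks; it does not call Delete a second time, since the first call already succeeded.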

You need file-server functionality on computers A and B, and you write a client on computer C.
The server could be a kind of FTP server, where you have to explicitly configure which directories are exposed on each side.
Or the server can simply be a Windows share. You can use UNC file names to address these files via the Windows API on computer C, and when you have mapped network drives on computer C you can work with the network files just as you would with local files.
Computers A and B must be configured so that the shares grant sufficient rights.
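As a sketch of the UNC-versus-mapped-drive equivalence described above (the server and share names below are hypothetical), the two path styles address the same file and can be built the same way:

```python
from pathlib import PureWindowsPath

# \\SERVER\share is a placeholder share name; a mapped drive letter
# addresses the same files once something like "net use Z: \\SERVER\share"
# has been run on computer C.
unc = PureWindowsPath(r"\\SERVER\share") / "reports" / "data.txt"
mapped = PureWindowsPath(r"Z:\reports") / "data.txt"

print(unc)     # \\SERVER\share\reports\data.txt
print(mapped)  # Z:\reports\data.txt
```

PureWindowsPath manipulates Windows-style paths without touching the filesystem, so the sketch runs anywhere.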

Related

How to correctly construct path to local folder when scp from remote to local

I'm trying to download a file from my remote server on DigitalOcean to my local machine on Windows. I haven't been able to figure out how to correctly specify the path to my local destination without getting a "No such file or directory" error. My Windows user is "Firstname Lastname", and some error messages seem to indicate that scp doesn't know how to handle the space in the name. This question has been asked multiple times, but the answers all use example paths. Here are some examples I have tried that do not work:
user#ipaddress:/var/www/html/wp-content/themes/akd/css/overwrite.css C:/Users/Firstname Lastname/Desktop
C:/Users/FirstnameLastname/Desktop
/Users/Firstname Lastname/Desktop
Users/Firstname Lastname/Desktop
Does anyone know the correct way to handle this situation?
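No answer is quoted in this thread, but the usual cause of this symptom is shell word-splitting: an unquoted destination with a space is passed to scp as two arguments, so quoting it (e.g. "C:/Users/Firstname Lastname/Desktop") is the common fix. The same idea, invoked from Python with the list form of subprocess (host details taken from the question), hands each element to scp as exactly one argument, so the space needs no escaping:

```python
import subprocess

remote = "user@ipaddress:/var/www/html/wp-content/themes/akd/css/overwrite.css"
local = r"C:/Users/Firstname Lastname/Desktop"

# Each list element reaches scp as a single argv entry, space included,
# because no shell performs word-splitting on the list form.
cmd = ["scp", remote, local]
# subprocess.run(cmd, check=True)  # not executed here: requires a live server
```

The call itself is commented out since it needs a reachable server; the point is that the destination travels as one argument.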

I need to access log files from my network file explorer and write the output

I need to access log files in my network folder through a Groovy script. I'm very new to Groovy scripting; please help me.
I'm using ReadyAPI here.
The expected result is to access my network log file and print the error logs.
Well, if you are running the project under a credential (Windows user) that has access to the required file, you can access it directly:
String fileContents = new File('<>').text
Then assert the content as you like.
If you don't have direct access, you first need to find a user that has it.
Then you have one of two options:
Map a network drive and use it as a local drive.
Import libraries that speak the SMB1 or SMB2 protocols, like jCIFS or similar. (Avoid this.)
PS: it's not good practice to use a Groovy script to access a network folder, whatever need you have.
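The one-liner above reads the whole file; the asker also wants only the error lines printed. A minimal sketch of that filtering step (language-agnostic in spirit; the path would be a mapped drive or UNC path in the asker's setup, and any local file works for trying it out):

```python
from pathlib import Path

def error_lines(log_path):
    """Return only the lines mentioning ERROR. Works the same whether
    log_path is local, a mapped drive (Z:/logs/app.log), or a UNC path,
    as long as the running credential can read it."""
    return [line for line in Path(log_path).read_text().splitlines()
            if "ERROR" in line]
```

Usage would be a simple loop: for line in error_lines(path): print(line).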

Export report to Excel

I want to export a table to an Excel file. I need to export a report.
ORA_EXCEL.new_document;
ORA_EXCEL.add_sheet('Sheet name');
ORA_EXCEL.query_to_sheet('select * from mytable');
ORA_EXCEL.save_to_blob(myblob);
This saves my table to a blob. How do I return it to the user (client)?
I need something simple that lets a user download an Excel file to their own computer. I tried this procedure in an Oracle workflow:
ORA_EXCEL.save_to_file('EXPORT_DIR', 'example.xlsx');
But that did not help, because it saves the file to a directory on the server, and I need it on the client's machine.
The way I have handled similar issues in the past was to work with the systems people to mount a directory from either a web server or a file server on the database server, then create a directory object so that the procedure can save to a location accessible to the user.
If the files are not sensitive and there is a limited number of users, a file server makes sense, as it is then just a matter of giving the users access to the file share.
If the files are sensitive, or there is a large number of unknown users, we used the web server and sent an email with a link enabling the user to download their file. Naturally there needs to be security built into this to stop people from downloading other users' files.
We didn't just email the files as attachments because:
1) Emails with attachments tend to get blocked.
2) We always advise people not to open attachments in emails. (Yes, I know we advise not to click on links as well, but nothing is perfect.)
Who or what is invoking the production of the document?
If it's done by an application the user is working in, that application can fetch the BLOB, store it in e.g. a TEMP directory, and call System.Diagnostics.Process.Start("..."); to open it with the associated application (see Open file with associated application).
If it's a website, it could stream the blob back with the Excel MIME type (see Setting mime type for excel document).
You could also store it in an Oracle DIRECTORY, but that has to be on the server and would need to be a network share to be accessible to clients (which is rarely accepted in a production environment!).
If mail isn't the solution, then maybe FTP is a way to store files in a common share. See the UTL_TCP package; an FTP transfer can be achieved with it (a bit hard to code, but there are solutions to be found on the web), and I guess professional tools that generate Office documents from an Oracle DB and distribute them do it like this.
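For the "stream the blob back as the Excel MIME type" route, the headers are fixed by the format regardless of web framework. A framework-agnostic sketch (the filename is just an example) of what the web tier would attach before writing the BLOB bytes to the response:

```python
# Standard MIME type for .xlsx workbooks.
XLSX_MIME = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"

def excel_response_headers(filename):
    """Headers to attach when streaming the BLOB so the browser offers a
    download and associates the file with Excel."""
    return {
        "Content-Type": XLSX_MIME,
        "Content-Disposition": f'attachment; filename="{filename}"',
    }
```

Whatever serves the request then writes these headers followed by the raw BLOB content.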

Why save file with original name is dangerous in an upload?

Currently I'm working on a web project (Classic ASP) and I'm going to build an upload form.
Folklore says:
"Don't use the real name to save the uploaded files."
What are the problems and dangers, from a security point of view?
Proper directory permissions should stop most of this stuff, but I suppose a potential danger with file names is that someone could name a file something like "../Default.asp" or "../Malware.asp", or some other malicious path, attempting to overwrite files and/or plant an executable script on your server.
If I'm using a single upload folder, I always save my users' uploads with a GUID file name, simply because users aren't very original and you get name conflicts very often otherwise.
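The two points above (discard the client-supplied path, generate a GUID name) can be sketched together; the extension whitelist below is a hypothetical example, not from the answer:

```python
import os
import uuid

ALLOWED_EXT = {".png", ".jpg", ".pdf"}  # example whitelist, adjust as needed

def safe_stored_name(original_name):
    """Generate a server-side name: only a vetted extension survives from the
    client-supplied name; everything else (including any '../') is discarded."""
    ext = os.path.splitext(os.path.basename(original_name))[1].lower()
    if ext not in ALLOWED_EXT:
        raise ValueError("disallowed file type")
    return f"{uuid.uuid4().hex}{ext}"
```

A name like "../Malware.asp" is rejected twice over: basename() strips the traversal, and ".asp" fails the whitelist.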

What is the correct way to setup linux users & groups when multiple users rsync to the same remote location?

I have multiple developers trying to rsync files (symfony php projects) to the same remote location. This has been setup as follows:
Each user has their own login on the remote server
Each user is a member of the same group on the server, say "mygroup"
Files locally and at the rsync destination are owned by a user and the group. E.g. someuser:mygroup
As far as I am aware, you must own a directory in order to set its access and modification times to arbitrary values; being a member of the owning group is not enough. For this reason, if user A tries to rsync directories owned by user B, rsync outputs the following errors:
rsync: failed to set times on "/some/path": Operation not permitted (1)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1058) [sender=3.0.5]
So, what is the correct way to setup users & groups when multiple users rsync to the same remote location?
What Let_Me_Be said. Then deploy from Git (or Mercurial) to testing or staging, and rsync from there to live. Better still, use something like Hudson/Jenkins to manage the whole shooting match for you.
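One common mitigation, not from the answer above, is a group-owned, setgid deployment directory so that files created by any developer inherit the shared group. Note this only fixes group ownership; the failed utime() calls on directories owned by another user are usually silenced with rsync's --omit-dir-times option, or avoided entirely by deploying as a single user as the answer suggests. A minimal sketch of the setgid setup (a temp directory stands in for the real deploy path):

```python
import os
import stat
import tempfile

deploy_dir = tempfile.mkdtemp()  # stands in for e.g. /var/www/project
# 0o2775 = rwxrwsr-x: group-writable, with the setgid bit set so new
# entries created inside inherit the directory's group ("mygroup").
os.chmod(deploy_dir, 0o2775)

mode = os.stat(deploy_dir).st_mode
print(bool(mode & stat.S_ISGID))  # True
```

In practice the equivalent one-off commands would be chgrp -R mygroup plus chmod -R g+ws on the deploy tree.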
