Retain the file server's original creation / modified dates of a file when downloading it from a website - Linux

How can I keep the original 'Created' and 'Modified' date and timestamps of a file when I download it from a website?
When I download a file from a website, the 'Created' and 'Modified' dates are set to the time of the download. I want my downloaded file to keep the original values from the file server.
In Windows there is a tool that can do this, called Internet Download Manager (IDM), but is there something that can do the same on Linux and Mac OS X?
I know that it can also depend on the file system the file server uses to interpret date and time stamps. For example, a Windows-based file server will probably be using NTFS, so its interpretation of its files' date and time stamps might differ from that of a Linux-based server. I don't know whether this has any impact on the end user's ability to download the original dates and times, regardless of which file server they come from.

wget can retrieve the timestamp where possible - it depends entirely on the file server and how it serves the file. For example, I cannot retrieve the timestamp of a file if I download it from the Internet Archive (http://www.archive.org); that site does not provide the original creation date and timestamp of a file.
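To illustrate the point about tooling: wget preserves the server-reported modification time by default, and curl can do the same with a flag. A minimal sketch (the https URLs are placeholders; the runnable part uses a local file:// URL, since that protocol also reports a modification time):

```shell
# curl's -R/--remote-time flag copies the server-reported modification
# time onto the downloaded file; wget does the same by default for
# HTTP and FTP. Real-world usage (URLs are placeholders):
#   wget https://example.com/file.tar.gz
#   curl -R -O https://example.com/file.tar.gz

# Local demonstration via the file:// protocol:
src=$(mktemp)
touch -d '2001-02-03 04:05:06' "$src"
curl -s -R -o "$src.copy" "file://$src"
stat -c %y "$src.copy"
```

Note that this can only preserve the 'Modified' time: HTTP exposes a Last-Modified header, but has no header for a file's creation date, so a 'Created' timestamp cannot be recovered this way at all.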

Related

Cognos Analytics - save weekly reports to local drive

In the current situation weekly scheduled reports are saved to the Cognos server. However, I was asked if there is a way to save them to a local network drive. Is this possible (documentation on this issue will help too)? We're using Cognos Analytics 11.0.13 on-premise.
Thanks in advance!
Yes. There are two ways.
Using Archive Location File System Root:
Within the Archive Location File System Root (in Configuration), a folder is created for a user or business unit to write to. In my case (on MS Windows) I use a junction pointing to their folder on the network.
In Define File System Locations (in Cognos Administration), I configure an alias to the folder and apply permissions for certain users, groups, or roles to use (write to) that alias.
Users schedule or use "run as" to get their reports to run and write output to their location.
Using CM.OutPutLocation and CM.OutputScript:
For the folder or report in Cognos, apply permissions for certain users, groups, or roles to create versions. (write or full permission?)
Users schedule or use "run as" to save their report output as a version of the report.
Cognos saves a copy to the folder defined in CM.OutPutLocation (in Cognos Administration).
The script defined in CM.OutputScript runs. This script can:
Identify which file was just created.
Read the description (xml) file to determine the report path.
If the report path matches a specific pattern or value, do something (like copy it to the subfolder/junction and rename it to something meaningful like the report name).
Delete the original output file.
The second option involves more coding, but it is far more flexible. The script must be a .bat file, but that file can call anything (PowerShell, Windows Scripting Host, an executable) to do the actual work, so a thorough knowledge of batch programming isn't even needed.
Down the line, if the requirement changes from saving to the local file system to saving to the cloud, there is also this functionality, which has been added:
https://www.ibm.com/support/knowledgecenter/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ag_manage.doc/t_ca_manage_cloud_storage_add_location.html

How to get the internal table data in excel format if the report runs in background?

Could you please tell me how to get internal table data into an Excel file saved on the local desktop (presentation server). Remember that the report runs in background.
Both requirements are contradictory, because a background job does not have a connection to any SAP GUI. That is, there is no SAP GUI session linked to a background job, and thus it is not possible to determine onto which local desktop the Excel file should be saved.
A possibility would be to store the results created in the background job somewhere and externalize the save to a local desktop into another program. When creating files on the SAP AS you should keep in mind what happens with these files once they are no longer needed, i.e. you need to do the file maintenance (deleting files that are no longer needed) yourself.
Theoretically you can do this if you create the file on a SAP AS and move it using a shell file-move command. However, it is bad practice to make any connection from the SAP AS to a user's machine.
The best solution here is to create the file on the SAP AS and have the user download it manually from there. You can also send the file to the user, for example by e-mail; in that case the user does no manual work at all.
It is also good practice to use the PI server functionality. It can deliver your file in whatever way the user wants to receive it.

Linux: Traverse Server Directory and build List of Checksums for all Files

I am running a web server with several CMS sites.
To be aware of hacks on my web server, I am looking for a mechanism with which I can detect changed files.
I am thinking of a tool / script which traverses the directory structure, builds a checksum for each file, and writes out a list of files with file size, last-modified date and checksum.
At the next execution, I would then be able to compare this list with the previous one and detect new or modified files.
Does anyone know a script or tool which can accomplish this?
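Dedicated file-integrity tools such as AIDE and Tripwire do exactly this, but the idea can also be sketched in a few lines of shell. A minimal sketch, assuming GNU find and coreutils (the example paths in the comment are placeholders):

```shell
#!/bin/sh
# Records "size mtime sha256 path" for every file under the web root
# given as $1 and diffs the result against the baseline file $2 kept
# from the previous run. Any diff output means a file was added,
# removed or modified.
snapshot_and_compare() {
    webroot=$1
    baseline=$2
    new="$baseline.new"

    # %s = size in bytes, %Y = mtime as epoch seconds (GNU find);
    # sort makes the listing order stable between runs
    find "$webroot" -type f -printf '%s %Y ' -exec sha256sum {} \; | sort > "$new"

    if [ -f "$baseline" ]; then
        diff "$baseline" "$new" || true
    fi
    mv "$new" "$baseline"
}

# Example invocation (e.g. from a daily cron job):
# snapshot_and_compare /var/www /var/lib/sitecheck/checksums.txt
```

Storing the baseline outside the web root (or on another host) matters here: an attacker who can change your CMS files could otherwise regenerate the checksum list too.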

Transfer zip file to web server

I would like to develop an app that targets everything from Gingerbread(version 2.3 API 9) to JellyBean(version 4.3 API 18).
The problem:
I need to transfer large images (40 to 50 at a time), either independently or in a zip file, without the user having to click on each file being transferred. As far as I can tell I need to use the HttpClient (org.apache) that was deprecated after JellyBean.
Right now the application takes the images and zips them prior to uploading. I can create additional zip files; for example, if I have 50MB to transfer, I can make each zip file about 10MB and have 5 files to transfer if I have to. I need to transfer these files to a web server. I can't seem to find anything about transferring files after JellyBean. All the searching I've done turns up the deprecated commands, and the posts are 2-5 years old. I have installed AndFTP and transferred a 16MB zip file last night that was created by my app, but I really don't want to use that, as it will require additional steps from the user. I will try AndFTP today and set up an intent to transfer the files to see how that works out. Supposedly AndFTP works until Lollipop (5.0). If there is an easier way, please let me know; hopefully I've missed something about transferring files. Is there another way to do this after JellyBean?

Schedule weekly Excel file download to an unique name

We have a database where every Monday an Excel file gets uploaded by a client. The file always has the same name, so if we forget it, we lose it. Is there a way we can make a script that renames the file and appends the date or a number?
We're using FileZilla to get the files now.
FileZilla does not allow any kind of automation.
You can use this WinSCP script:
open ftp://user:password@host/
get "/path/sheet.xls" "c:\archive\sheet-%TIMESTAMP#yyyy-mm-dd%.xls"
exit
The script connects to the server and downloads the sheet to a local archive under a name like sheet-YYYY-MM-DD.xls.
For details see:
Automate file transfers (or synchronization) to FTP server or SFTP server
Downloading file to timestamped-filename
Then create a task in Windows scheduler to run the winscp.exe every Monday with arguments:
/script="c:\path_to_script\script.txt" /log="c:\path_to_script\script.log"
Logging (/log=...) is optional, but it's recommended.
For details, see Schedule file transfers (or synchronization) to FTP/SFTP server.
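The scheduler task can also be created from the command line with schtasks instead of the GUI. A sketch of such a Task Scheduler configuration (task name and paths are examples; adjust the WinSCP install path to your system):

```shell
schtasks /create /tn "Weekly sheet download" ^
    /tr "\"C:\Program Files (x86)\WinSCP\winscp.exe\" /script=\"c:\path_to_script\script.txt\" /log=\"c:\path_to_script\script.log\"" ^
    /sc weekly /d MON /st 08:00
```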
(I'm the author of WinSCP)