Preserve the modified date of a file during download - node.js

I got feedback asking me to preserve the modified date of a downloaded file. I found a way to preserve it if I serve the file inside a zip, but I run into a problem when I serve the file as-is from my Node.js server.
Below is my current implementation:
try {
    var stat = fs.statSync(fullpath);
    self.response.writeHead(200, {
        'Content-Type': mimeType,
        'Last-Modified': stat.mtime // not working
    });
    var fileStream = fs.createReadStream(fullpath);
    fileStream.pipe(self.response);
    fileStream.on('end', function() {
        console.log("complete");
    });
} catch (e) {
    // handles the user cancelling the download without bringing down the whole system
    logger.error("streaming failed, because of: " + e.message);
}
Initially I thought that setting the 'Last-Modified' header would do the trick, but apparently it does not. It needs to work in Chrome; if it works across browsers, even better.
Note: It is not a format problem; using "Tue, 15 Nov 1994 12:45:26 GMT" instead of stat.mtime does not work either.
Update: As of early 2017 this seems impossible from a browser; as shown in this link, the only way to do this is with curl or wget.

Do you mean the modified date of the file once the browser has downloaded and saved it? You can't do that, as it would require operating-system access on the remote computer. The modified date of a saved file is a function of the filesystem on the client machine.
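For completeness, a non-browser client can set the timestamp itself. A minimal Node.js sketch (the URL and output path are placeholders, and it assumes the server sends a valid Last-Modified header):
var fs = require('fs');
var http = require('http');

http.get('http://localhost:8080/somefile', function (res) {
    var out = fs.createWriteStream('somefile');
    res.pipe(out);
    out.on('finish', function () {
        var lastModified = res.headers['last-modified'];
        if (lastModified) {
            var mtime = new Date(lastModified);
            // fs.utimes sets both the access time and the modification time
            fs.utimes('somefile', mtime, mtime, function (err) {
                if (err) console.error(err);
            });
        }
    });
});
This is essentially what curl -R and wget do: they read the Last-Modified response header and stamp it onto the saved file, which is exactly the step a browser never performs.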

Related

Downloading Binary File from OneDrive API Using Node/Axios

I am using the OneDrive API to grab a file with a Node application using the axios library.
I am simply trying to save the file to the local machine (Node is running locally).
I use the OneDrive API to get the document download link, which does not require authentication (via https://graph.microsoft.com/v1.0/me/drives/[location]/items/[id]).
Then I make this call with the download document link:
response = await axios.get(url);
I receive a JSON response, which includes, among other things, the content-type, content-length, content-disposition and a data element which is the contents of the file.
When I display the JSON response to the console, the data portion looks like this:
data: 'PK\u0003\u0004\u0014\u0000\u0006\u0000\b\u0000\u0000\u0000!\u...'
If the document is simply text, I can save it easily using:
fs.writeFileSync([path], response.data);
But if the file is binary, like a docx file, I cannot figure out how to write it properly. Every time I try it seems to have the wrong encoding. I tried different encodings.
How do I save the file properly based on the type of file retrieved?
Have you tried explicitly setting the encoding option of fs.writeFileSync to null, signifying that the data is binary?
fs.writeFileSync([path], response.data, {
    encoding: null
});
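If that alone doesn't help, the bytes may already have been mangled when axios decoded the response body into a string. A sketch (url and path are placeholders) that asks axios for the raw bytes instead:
const fs = require('fs');
const axios = require('axios');

async function download(url, path) {
    // With responseType 'arraybuffer', response.data is a Buffer in Node,
    // so binary formats such as docx or zip survive intact
    const response = await axios.get(url, { responseType: 'arraybuffer' });
    fs.writeFileSync(path, response.data);
}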

How do I cache bust imported modules in es6?

ES6 modules allow us to create a single point of entry like so:
// main.js
import foo from 'foo';
foo();
<script src="scripts/main.js" type="module"></script>
foo.js will be stored in the browser cache. This is desirable until I push a new version of foo.js to production.
It is common practice to add a query-string param with a unique id to force the browser to fetch a new version of a JS file (foo.js?cb=1234).
How can this be achieved using the es6 module pattern?
There is one solution for all of this that doesn't involve a query string. Let's say your module files are in /modules/. Use relative module resolution (./ or ../) when importing modules, and then rewrite the paths on the server side to include a version number: use something like /modules/x.x.x/ and rewrite that path to /modules/. You can then keep a global version number for all modules by including your first module with
<script type="module" src="/modules/1.1.2/foo.mjs"></script>
Or, if you can't rewrite paths, put the files into a folder /modules/version/ during development, then rename the version folder to the version number and update the path in the script tag when you publish.
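To illustrate the server-side rewrite this answer describes, here is a minimal Express sketch (the mount path and directory are assumptions): any version segment is accepted and stripped, so /modules/1.1.2/foo.mjs is served from modules/foo.mjs while the version in the URL busts the cache.
const express = require('express');
const app = express();

// /modules/<version>/foo.mjs is served from ./modules/foo.mjs;
// the version segment exists only to invalidate the browser cache
app.use('/modules/:version', express.static('modules', {
    maxAge: '365d',
    immutable: true
}));

app.listen(8080);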
HTTP headers to the rescue. Serve your files with an ETag that is the checksum of the file; S3 does that by default, for example.
When the browser requests the file again, it attaches the ETag in an If-None-Match header: the server verifies whether the ETag matches the current file and sends back either a 304 Not Modified, saving bandwidth and time, or the new content of the file (with its new ETag).
This way, if you change a single file in your project, the user will not have to re-download the full content of every other module. It would be wise to add a short max-age header too, so that if the same module is requested twice within a short time there won't be additional requests.
If you add cache busting (e.g. appending ?x={randomNumber} through a bundler, or adding the checksum to every file name) you will force the user to download the full content of every necessary file at every new project version.
In both scenarios you are going to make a request for each file anyway (the imported files will cascade into new requests, which at least end in small 304s if you use ETags). To avoid that you can use dynamic imports, e.g. if (userClickedOnSomethingAndINeedToLoadSomeMoreStuff) { import('./someModule').then('...') }
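A minimal sketch of the ETag approach, assuming an Express server and a ./public/modules directory (Express computes weak ETags by default and answers a matching If-None-Match with 304):
const express = require('express');
const app = express();

app.use('/modules', express.static('public/modules', {
    etag: true,    // revalidate via If-None-Match -> 304 when unchanged
    maxAge: '60s'  // short freshness window to absorb rapid repeat requests
}));

app.listen(8080);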
From my point of view dynamic imports could be a solution here.
Step 1)
Create a manifest file with gulp or webpack that contains a mapping like this:
export default {
    "/vendor/lib-a.mjs": "/vendor/lib-a-1234.mjs",
    "/vendor/lib-b.mjs": "/vendor/lib-b-1234.mjs"
};
Step 2)
Create a helper function to resolve your paths:
import manifest from './manifest.js';
const busted = (file) => {
    return manifest[file];
};
export default busted;
Step 3)
Use dynamic import
import busted from '../busted.js';
import(busted('/vendor/lib-b.mjs'))
    .then((module) => {
        module.default();
    });
I gave it a short try in Chrome and it works. Handling relative paths is the tricky part here.
I've created a Babel plugin which adds a content hash to each module name (static and dynamic imports).
import foo from './js/foo.js';
import('./bar.js').then(bar => bar());
becomes
import foo from './js/foo.abcd1234.js';
import('./bar.1234abcd.js').then(bar => bar());
You can then use Cache-Control: immutable to let UAs (browsers, proxies, etc.) cache these versioned URLs indefinitely. Some max-age is probably more reasonable, depending on your setup.
You can use the raw source files during development (and testing), and then transform and minify the files for production.
What I did was handle the cache busting in the web server (nginx in my instance).
Instead of serving
<script src="scripts/main.js" type="module"></script>
serve it like this, where 123456 is your cache-busting key:
<script src="scripts/123456/main.js" type="module"></script>
and include a location block in nginx like:
location ~ (.+)\/(?:\d+)\/(.+)\.(js|css)$ {
    try_files $1/$2.min.$3 $uri;
}
Requesting scripts/123456/main.js will then serve scripts/main.min.js, and an update to the key will result in a new file being served. This solution works well for CDNs too.
Just a thought at the moment, but you should be able to get Webpack to put a content hash in all the split bundles and write that hash into your import statements for you. I believe it does the latter by default.
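A sketch of that configuration, assuming webpack 4+ where [contenthash] is built in; webpack rewrites the import statements inside the emitted bundles to the hashed names for you:
// webpack.config.js
module.exports = {
    output: {
        filename: '[name].[contenthash].js',       // entry bundles
        chunkFilename: '[name].[contenthash].js'   // split/dynamic chunks
    }
};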
You can use an importmap for this purpose. I've tested it at least in Edge. It's just a twist on the old trick of appending a version number or hash to the query string: a plain import statement gives you no place to vary the query string, but with an importmap the mapped URL, query string included, is what gets requested from the server.
<script type="importmap">
{
    "imports": {
        "/js/mylib.js": "/js/mylib.js?v=1",
        "/js/myOtherLib.js": "/js/myOtherLib.js?v=1"
    }
}
</script>
Then in your calling code:
import myThing from '/js/mylib.js';
import * as lib from '/js/myOtherLib.js';
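If bumping ?v= by hand becomes tedious, the map can be generated at build time. A Node.js sketch (the file list and public/ prefix are assumptions) that derives each version from the file's content hash, so a URL only changes when the file does:
const fs = require('fs');
const crypto = require('crypto');

const files = ['/js/mylib.js', '/js/myOtherLib.js'];
const imports = {};
for (const file of files) {
    const hash = crypto.createHash('md5')
        .update(fs.readFileSync('public' + file))
        .digest('hex')
        .slice(0, 8);
    imports[file] = file + '?v=' + hash;
}
fs.writeFileSync('public/importmap.json', JSON.stringify({ imports }, null, 2));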
You can use ETags, as pointed out by a previous answer, or alternatively use Last-Modified in combination with If-Modified-Since.
Here is a possible scenario:
The browser first loads the resource. The server responds with Last-Modified: Sat, 28 Mar 2020 18:12:45 GMT and Cache-Control: max-age=60.
If a second request is initiated within 60 seconds of the first, the browser serves the file from cache and doesn't make an actual request to the server.
If a request is initiated after 60 seconds, the browser will consider the cached file stale and send the request with an If-Modified-Since: Sat, 28 Mar 2020 18:12:45 GMT header. The server will check this value and:
If the file was modified after said date, it will issue a 200 response with the new file in the body.
If the file was not modified after that date, the server will issue a 304 Not Modified status with an empty body.
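That flow in a bare Node.js handler, as a minimal sketch (a single hard-coded file; a real server would also handle errors and MIME types):
const fs = require('fs');
const http = require('http');

http.createServer((req, res) => {
    const stat = fs.statSync('./main.js');
    const since = req.headers['if-modified-since'];
    // HTTP dates have whole-second precision, so compare in seconds
    if (since && Math.floor(stat.mtimeMs / 1000) <= Math.floor(Date.parse(since) / 1000)) {
        res.writeHead(304);
        return res.end();
    }
    res.writeHead(200, {
        'Content-Type': 'text/javascript',
        'Last-Modified': stat.mtime.toUTCString(),
        'Cache-Control': 'max-age=60'
    });
    fs.createReadStream('./main.js').pipe(res);
}).listen(8080);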
I ended up with this setup for an Apache server:
<IfModule headers_module>
    <FilesMatch "\.(js|mjs)$">
        Header set Cache-Control "public, must-revalidate, max-age=3600"
        Header unset ETag
    </FilesMatch>
</IfModule>
You can set max-age to your liking.
We have to unset the ETag; otherwise Apache keeps responding with 200 OK every time (it's a bug). Besides, you won't need it if you use caching based on the modification date.
A solution that crossed my mind, but that I won't use because I don't like it, is:
window.version = `1.0.0`;
let { default: fu } = await import( `./bar.js?v=${ window.version }` );
Using the import "method" allows you to pass in a template literal string. I also added the version to window so that it is easily accessible no matter how deeply I'm importing JS files. The reason I don't like it, though, is that I have to use await, which means it has to be wrapped in an async method.
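For illustration, the wrapper that complaint refers to might look like this (an immediately invoked async function, reusing the names from the snippet above):
(async () => {
    window.version = '1.0.0';
    let { default: fu } = await import(`./bar.js?v=${window.version}`);
    fu();
})();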
If you are using Visual Studio 2022 and TypeScript to write your code, you can follow a convention of adding a version number to your script file names, e.g. MyScript.v1.ts. When you make changes and rename the file to MyScript.v2.ts, Visual Studio shows a dialog asking whether to update references.
If you click Yes, it updates all the files that were importing this module to refer to MyScript.v2.ts instead of MyScript.v1.ts. The browser will notice the name change too and download the new modules as expected.
It's not a perfect solution (e.g. if you rename a heavily used module, a lot of files can end up being updated) but it is a simple one!
This works for me:
let url = '/module/foo.js';
// Fetch the file ourselves, bypassing the module cache, then import it
// through a fresh blob URL
url = URL.createObjectURL(await (await fetch(url)).blob());
let foo = await import(url);
I came to the conclusion that cache busting should not be used with ES modules.
Actually, if you have versioning in the URL, the version acts as a cache buster. For instance: https://unpkg.com/react@18.2.0/umd/react.production.min.js
If you don't have versioning in the URL, use the HTTP header Cache-Control: max-age=0, no-cache to force the browser to always check whether a new version of the file is available.
no-cache tells the browser to cache the file but to always perform a check.
no-store tells the browser not to cache the file. Don't use it!
Another approach: redirection
unpkg.com solved this problem with HTTP redirection. It is therefore not an ideal solution, because it involves 2 HTTP requests instead of 1.
The first request gets redirected to the latest version of the file (not cached, or cached for a short period of time)
The second request gets the JS file (cached)
=> All JS files include the versioning in the URL (and have an aggressive caching strategy)
For instance https://unpkg.com/react@18.2.0/umd/react.production.min.js
=> Removing the version from the URL leads to an HTTP 302 redirect pointing to the latest version of the file
For instance https://unpkg.com/react/umd/react.production.min.js
Make sure the redirection is not cached by the browser, or is cached only for a short period of time. (unpkg allows 600 seconds of caching, but it's up to you.)
About multiple HTTP requests: yes, if you import 100 modules, your browser will make 100 requests. But with HTTP/2 and HTTP/3 this is no longer a problem, because all requests are multiplexed over one connection (transparently to you).
About recursion:
If the module you are importing also imports other modules, you will want to look into <link rel="modulepreload"> (source: Chrome dev blog).
The modulepreload spec actually allows for optionally loading not just the requested module, but all of its dependency tree as well. Browsers don't have to do this, but they can.
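For instance, preload hints for an entry module and one of its dependencies might look like this (paths hypothetical):
<link rel="modulepreload" href="/js/app.js">
<link rel="modulepreload" href="/js/helpers.js">
<script type="module" src="/js/app.js"></script>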
If you are using this technique in production, I am deeply interested in your feedback!
Append a version to all ES6 imports with PHP
I didn't want to use a bundler just for this, so I created a small function that modifies the import statements of all the JS files in a given directory, so that the version is appended to each imported file path as a query parameter. It busts the cache on a version change.
This is far from an ideal solution: all JS file contents are verified by the server on each request, and on each version change the client reloads every JS file that has imports, instead of just the changed ones.
But it is good enough for my project right now. I thought I'd share.
$assetsPath = '/public/assets';
$version = '0.7';
$rii = new RecursiveIteratorIterator(new RecursiveDirectoryIterator($assetsPath, FilesystemIterator::SKIP_DOTS));
foreach ($rii as $file) {
    if (pathinfo($file->getPathname())['extension'] === 'js') {
        $content = file_get_contents($file->getPathname());
        $originalContent = $content;
        // Matches lines that have 'import ' then any string then ' from ' and a single- or double-quote
        // opening, then any string (path), then '.js', an optional numeric v GET param '?v=234',
        // and closing quotes with ';' at the end
        preg_match_all('/import (.*?) from ("|\')(.*?)\.js(\?v=\d*)?("|\');/', $content, $matches);
        // $matches contains the following:
        // Key [0] entire matching string including the search pattern
        // Key [1] string after the 'import ' word
        // Key [2] single or double quotes of path opening after the "from" word
        // Key [3] string after the opening quotes -> path without extension
        // Key [4] optional '?v=1' GET param and [5] closing quotes
        // Loop over import paths
        foreach ($matches[3] as $key => $importPath) {
            $oldFullImport = $matches[0][$key];
            // Remove query params if version is null
            if ($version === null) {
                $newImportPath = $importPath . '.js';
            } else {
                $newImportPath = $importPath . '.js?v=' . $version;
            }
            // Old import path, potentially with GET param
            $existingImportPath = $importPath . '.js' . $matches[4][$key];
            // Search for the old import path and replace it with the new one
            $newFullImport = str_replace($existingImportPath, $newImportPath, $oldFullImport);
            // Replace in file content
            $content = str_replace($oldFullImport, $newFullImport, $content);
        }
        // Replace file contents with the modified one
        if ($originalContent !== $content) {
            file_put_contents($file->getPathname(), $content);
        }
    }
}
Setting $version = null removes all query parameters from the imports in the given directory.
This adds between 10 and 20 ms per request in my application (approx. 100 JS files) when the content is unchanged, and 30-50 ms when the content changes.
Using a relative path works for me:
import foo from './foo';
or
import foo from './../modules/foo';
instead of
import foo from '/js/modules/foo';
EDIT
Since this answer was downvoted, I am updating it. The module is not always reloaded. The first time, you have to reload the module manually, and then the browser (at least Chrome) will "understand" that the file was modified and will reload the file every time it is updated.

Vimeo API: Upload from a File Using a Form

I followed the docs for the Vimeo Node.js API to upload a file. It's quite simple, and I have it working by running it directly in Node, with the exception that it requires me to pass the full path of the file I want to upload. The code is here:
function uploadFile() {
    let file = '/Users/full/path/to/file/bulls.mp4';
    let video_id; // the eventual end URI of the uploaded video
    lib.streamingUpload(file, function (error, body, status_code, headers) {
        if (error) {
            throw error;
        }
        lib.request(headers.location, function (error, body, status_code, headers) {
            console.log(body);
            video_id = body.uri;
            // after it's done uploading, and the result is returned, update info
            updateVideoInfo(video_id);
        });
    }, function (upload_size, file_size) {
        console.log("You have uploaded " +
            Math.round((upload_size / file_size) * 100) + "% of the video");
    });
}
Now I want to integrate this into a form generated in my React app, except that the result of evt.target.files[0] is not a full path; the result is this:
File {name: "bulls.mp4", lastModified: 1492637558000, lastModifiedDate: Wed Apr 19 2017 14:32:38 GMT-0700 (PDT), webkitRelativePath: "", size: 1359013595…}
Just for the sake of it, I piped that into my already-working upload function and it didn't work, for the reasons specified. Am I missing something? If not, I just want to clarify what I actually have to do. So now I'm looking at the official Vimeo guide and wanted to make sure that is the right road to go down. See: https://developer.vimeo.com/api/upload/videos
So if I'm reading it right, you do several requests to achieve the same goal?
1) Do a GET to https://api.vimeo.com/me to find out the remaining upload quota they have.
2) Do a POST to https://api.vimeo.com/me/videos to get an upload ticket. Use type: streaming if I want a resumable upload, such as those provided by the Vimeo streamingUpload() function.
3) Do a PUT to https://1234.cloud.vimeo.com/upload?ticket_id=abcdef124567890
4) Do a PUT to https://1234.cloud.vimeo.com/upload?ticket_id=abcdef124567890, but without file data and with the header Content-Range: bytes */*, any time I want to check the bytes uploaded.
Does that sound right? Or can you simply use a form, and I got it wrong somewhere? Let me know. Thanks.
There's some example code in this project that might be worth checking out: https://github.com/websemantics/vimeo-upload.
Your description is mostly correct for the streaming system, but I want to clarify the last two points.
3) In this step, you should make a PUT request to that URL with a Content-Length header describing the full size of the file (as described here: https://developer.vimeo.com/api/upload/videos#upload-your-video)
4) In this step, the reason you check the bytes uploaded is to confirm that you have completed the upload, or to recover if your connection died during the PUT request. We save as many bytes as possible, and we respond to the request in step 4 with how many bytes we received. This lets you resume step 3 where you left off instead of at the very beginning.
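A minimal Node.js sketch of that check, assuming uploadUrl is the upload ticket URL from the earlier steps and that the endpoint answers the Content-Range: bytes */* probe with a Range header such as bytes=0-1000:
const https = require('https');

function checkProgress(uploadUrl) {
    return new Promise((resolve, reject) => {
        const req = https.request(uploadUrl, {
            method: 'PUT',
            headers: { 'Content-Range': 'bytes */*' }
        }, (res) => {
            // e.g. "bytes=0-1000" means the PUT can resume from byte 1001
            resolve(res.headers.range);
        });
        req.on('error', reject);
        req.end();
    });
}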
For stability we highly recommend the resumable uploader, but if you are looking for simplicity we do offer a simple POST uploader that uses an HTML form. The docs for that are here: https://developer.vimeo.com/api/upload/videos#simple-upload

List&Label Webreporting: Export/Printing does not work on IIS Server

Hi, and thank you in advance for any help.
I am trying to POST a JSON object to an ASP.NET MVC server via jQuery/Ajax. The controller method is supposed to take the JSON input and use it as a DataProvider for List & Label 22. The report should then be generated and offered to the user as a PDF file for download.
Since I want the structure of the JSON object to be generic, I don't want to create a specific model in ASP.NET for this request, but rather pass the JSON object over as a string (I know that I might run into some size restrictions, but I will worry about that later :) ).
Here is my POST request:
<script>
    function getReport() {
        // dummy data
        var data = { JsonVariable1: 1, JsonVariable2: "JsonVariable2" };
        var dataSource = JSON.stringify(data);
        $.ajax({
            type: "POST",
            dataType: "text",
            url: "@Url.Action("/JsonTest")",
            data: "aDataSource=" + dataSource,
            async: false,
            success: function (result) {
                alert('Success!');
            }
        });
    }
</script>
And this is the Controller method:
[HttpPost]
public ActionResult JsonTest(string aDataSource) {
    combit.ListLabel22.ListLabel vLL = new combit.ListLabel22.ListLabel();
    JsonDataProvider vJsonProvider = new JsonDataProvider(aDataSource);
    vLL.FileRepository = GetCurrentRepository();
    vLL.AutoProjectFile = mReportRepositoryId;
    vLL.DataSource = vJsonProvider;
    vLL.ExportOptions.Add(LlExportOption.ExportTarget, "PDF");
    vLL.Print(); // This causes the problem on the published server
    return Json("Success");
}
Locally, i.e. in my Visual Studio (2015) dev environment, this works fine. However, when I publish the code to my IIS server, the POST request doesn't terminate. I have found this line
vLL.Print();
to be the problem. If I comment out this line, the request terminates as expected. This line generates the report and exports it to a PDF, which will in turn be offered to the user as a download.
I'm using IIS 8.5 and .NET Framework 4.5 on a machine running Windows Server 2012 R2. A printer driver is installed and the regular List & Label functionality is working (e.g. starting the web designer, previewing reports via HTML, etc.).
Does anyone have any idea what I am missing here? I am not a web developer, and I may also have forgotten to adjust some configuration on my IIS server.
Thanks!
Just calling the Print method is not enough, as you need to tell List & Label where to generate the report. I'd rather use the Export method (as it wires up a number of convenient things, like muting the file dialogs) in this way:
string reportResult = "Report." + Guid.NewGuid() + ".pdf";
string outputFile = Server.MapPath("~/exports/") + reportResult;
ExportConfiguration exportConfiguration = new ExportConfiguration(LlExportTarget.Pdf, outputFile, mReportRepositoryId);
vLL.Export(exportConfiguration);
You should then find your PDF in the "exports" path of your web application on the server. To troubleshoot issues like this, you can use the provided debugging tool Debwin4. It shows you all calls to the API and gives hints on missing options or input.

Unable to open local file using cordova inappbrowser on windows 8.1 platform

I am developing a PhoneGap application and we've recently added support for the Windows 8.1 platform. The application downloads/creates files which are saved to the device using the Cordova FileSystem API.
I have successfully saved a file to the device using a URL which looks like this:
ms-appdata:///local/file.png
I have checked on my PC and the file is viewable inside the LocalState folder under the app's root folder. However, when I try to open this file using InAppBrowser, nothing happens; no error message is reported and none of the InAppBrowser default events fire.
function empty() { alert('here'); } // never fires
var absoluteUrl = "ms-appdata:///local/file.png";
cordova.InAppBrowser.open(absoluteUrl, "_blank", "location=no", { loadstart: empty, loadstop: empty, loaderror: empty });
I have verified that the URL is valid by calling the following built-in JavaScript on it:
Windows.Storage.StorageFile.getFileFromApplicationUriAsync(uri).done(function (file) {
    debugger; // the file object contains the correct path to the file; C:\...etc.
});
Also, adding the url as the src for an img tag works as expected.
I have also tried attaching the InAppBrowser handlers using addEventListener("loadstart") etc., but none of them fire either. However, when I try to open "http://www.google.com" the events do fire and the InAppBrowser pops up on the screen.
After inspecting the DOM I can see that the InAppBrowser element has been added, but it doesn't appear to have a source attribute set:
<div class="inAppBrowserWrap">
    <x-ms-webview style="border-width: 0px; width: 100%; height: 100%;"></x-ms-webview>
</div>
I have looked at other questions, such as this one, but to no avail. I have verified that:
a) InAppBrowser is installed
b) deviceReady has fired
I have also tried changing the target to "_self" (same issue) and "_system" (a popup saying you need a new app to open a file of type msappdata://), and I'm running out of ideas. Has anybody come across similar issues?
I had a similar problem. My Cordova app downloads a file and then opens it with the native browser (so that images, PDF files and so on are properly handled).
In the end I had to modify the InAppBrowserProxy.js class (part of the InAppBrowser plugin for the Windows platform).
This is the code that opens the file (plain JavaScript):
// This value comes from somewhere, I write it here as an example
var path = 'ms-appdata:///local//myfile.jpg';
// Open file in InAppBrowser
window.open(path, '_system', 'location=no');
Then I updated the InAppBrowserProxy.js file (under platforms\windows\www\plugins\cordova-plugin-inappbrowser\src\windows). I replaced this code fragment:
if (target === "_system") {
    url = new Windows.Foundation.Uri(strUrl);
    Windows.System.Launcher.launchUriAsync(url);
}
with this:
if (target === "_system") {
    if (strUrl.indexOf('ms-appdata:///local//') == 0) {
        var fileName = decodeURI(strUrl.substr(String(strUrl).lastIndexOf("/") + 1));
        var localFolder = Windows.Storage.ApplicationData.current.localFolder;
        localFolder.getFileAsync(fileName).then(function (file) {
            Windows.System.Launcher.launchFileAsync(file);
        }, function (error) {
            console.log("Error getting file '" + fileName + "': " + error);
        });
    } else {
        url = new Windows.Foundation.Uri(strUrl);
        Windows.System.Launcher.launchUriAsync(url);
    }
}
This is a very ad-hoc hack, but it did the trick for me, and it could be improved, extended, and even standardized.
Anyway, there may be other ways to achieve this; it's just that this worked for me.
After more searching, it seems that the x-ms-webview, which is the underlying component used by PhoneGap for Windows, only supports loading HTML content. This Microsoft blog post on the web view control states that:
UnviewableContentIdentified – Is fired when a user navigates to content other than a webpage. The WebView control is only capable of displaying HTML content. It doesn't support displaying standalone images, downloading files, viewing Office documents, etc. This event is fired so the app can decide how to handle the situation.
This article suggests looking at the Windows.Data.Pdf namespace for providing in-app support for reading PDFs.
