Issue while adding files to an IPFS server with Cloudflare enabled - node.js

I have an Ubuntu server with Cloudflare enabled. I have set up nginx as a reverse proxy, defined the ports, and completed the IPFS setup.
While adding a file on the server I get the following error:
Unexpected token < in JSON at position 0
I am creating a buffer of the file and adding the data using the ipfs.files.add function.
Here is the code snippet:
let testBuffer = new Buffer(testFile);
ipfs.files.add(testBuffer, function (err, file) {
  if (err) {
    console.log(err);
    return res.send(responseGenerator.getResponse(500, "Failed to process your request!!!", []));
  }
  console.log(file);
});
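For context, "Unexpected token < in JSON at position 0" usually means the client received an HTML page (for example a Cloudflare or nginx error page) instead of a JSON response from the IPFS API. A minimal sketch, not taken from the question, of pointing the ipfs-api client explicitly at the API endpoint behind the proxy; the host, port, and protocol values are placeholders:
const ipfsAPI = require('ipfs-api');

// Placeholder values: the client must reach the proxied IPFS API (default port 5001),
// not the gateway or any endpoint that answers with an HTML page.
const ipfs = ipfsAPI({ host: 'ipfs.example.com', port: 443, protocol: 'https' });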
Kindly help with the solution.
Thanks.

Related

Stream file through SFTP connection to client

I'm writing a Node Express server which connects via sftp to a file store. I'm using the ssh2-sftp-client package.
To retrieve files it has a get function with the following signature:
get(srcPath, dst, options)
The dst argument should either be a string or a writable stream, which will be used as the destination for a stream pipe.
I would like to avoid creating a file on my server and instead stream the file directly to my client to reduce memory consumption, as described in this article. I tried to accomplish this with the following code:
const get = (writeStream) => {
  sftp.connect(config).then(() => {
    return sftp.get('path/to/file.zip', writeStream)
  });
};

app.get('/thefile', (req, res) => {
  get(res); // pass the res writable stream to sftp.get
});
However, this causes my Node server to crash due to an unhandled promise rejection. Is what I am attempting possible? Should I store the file on my server machine first before sending it to the client? I've checked the documentation/examples for the sftp package in question, but cannot find an example of what I am looking for.
I found the error, and it's a dumb one on my part: I forgot to end the sftp connection. When this method was called a second time, it threw the exception when it tried to connect again. If anyone finds themselves in the same situation, remember to end the connection once you're finished with it, like this:
const get = (writeStream) => {
  return sftp.connect(config).then(() => {
    return sftp.get('path/to/file.zip', writeStream);
  }).then(response => {
    sftp.end(); // close the connection so the next call can connect again
    return response;
  });
};
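A minimal self-contained sketch of the same idea (the connection settings below are hypothetical), with a catch handler so a failed transfer is reported instead of crashing the server with an unhandled rejection:
const express = require('express');
const Client = require('ssh2-sftp-client');

const app = express();
const sftp = new Client();

// Hypothetical connection settings.
const config = { host: 'sftp.example.com', username: 'user', password: 'secret' };

app.get('/thefile', (req, res) => {
  sftp.connect(config)
    .then(() => sftp.get('path/to/file.zip', res)) // stream straight into the HTTP response
    .then(() => sftp.end())                        // close the connection so the next request can reconnect
    .catch(err => {
      console.error(err);
      if (!res.headersSent) {
        res.status(500).send('Transfer failed');
      }
    });
});

app.listen(3000);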

NodeJS, bypass Linux hosts file

Is there a way in NodeJS to bypass Linux's /etc/hosts file?
For example, if I have an entry in my hosts file like:
127.0.0.1 example.com
How would I access the 'real' example.com from NodeJS?
I don't want to temporarily modify /etc/hosts to do this, as it can bring about some problems and is not a clean solution.
Thanks in advance!
I didn't think this was possible at first, but then I stumbled upon a way to ignore the hosts file on Linux. Additionally, I found a DNS API built into Node. Basically, by default, Node defers to the operating system to do a DNS lookup (which will read from /etc/hosts and not make a DNS query at all if it doesn't have to). However, Node also exposes a method to resolve hostnames by explicitly making DNS requests. This will give you the IP address you're looking for.
$ cat /etc/hosts
<snip>
127.0.0.1 google.com
$ ping google.com
PING google.com (127.0.0.1) 56(84) bytes of data.
...
That shows the hosts file is definitely working.
const dns = require('dns')

// This will use the hosts file.
dns.lookup('google.com', function (err, res) {
  console.log('This computer thinks Google is located at: ' + res)
})

// This makes an actual DNS query and ignores /etc/hosts.
dns.resolve4('google.com', function (err, res) {
  console.log('A real IP address of Google is: ' + res[0])
})
This outputs different values as expected:
$ node host.js
This computer thinks Google is located at: 127.0.0.1
A real IP address of Google is: 69.77.145.253
Note that I tested this using the latest Node, v8.0.0. However, I looked at some older docs and the API has existed since at least v0.12, so, assuming nothing significantly changed, this should work for whatever version of Node you happen to be running. Also, resolving a hostname to an IP address might be only half the battle. Some websites will behave strangely (or not at all) if you try accessing them through an IP directly.
Based on @supersam654's answer, here is my solution (a full example) with a .get request that ignores the hosts file for all requests:
const dns = require("dns");
const url = require("url");
const https = require("https");

const req = (urlString, cb) => {
  const parsedUrl = url.parse(urlString);
  const hostname = parsedUrl.hostname;
  dns.resolve4(hostname, function (err, res) {
    if (err) throw err;
    console.log(`A real IP address of ${hostname} is: ${res[0]}`);
    // Swap the hostname for the resolved IP, but keep sending the original
    // hostname as the Host header and as the TLS servername (SNI).
    const newUrl = urlString.replace(`${parsedUrl.protocol}//${hostname}`, `${parsedUrl.protocol}//${res[0]}`);
    https
      .get(
        newUrl,
        {
          headers: { host: hostname },
          servername: hostname
        },
        resp => {
          let data = "";
          // A chunk of data has been received.
          resp.on("data", chunk => {
            data += chunk;
          });
          // The whole response has been received. Hand it to the callback.
          resp.on("end", () => {
            cb(data);
          });
        }
      )
      .on("error", err => {
        console.log("Error requesting " + urlString + ": " + err.message);
      });
  });
};
// Example
req("https://google.com/", data => console.log(data));

Sending data to ActiveMQ via node.js/stompit

hope someone out there can help me on this one!
Task:
Send xml files to ActiveMQ.
Environments:
Developing:
OS X 10.10.5
node 4.4.3
stompit 0.25.0
Production:
Ubuntu 16.04
node 7.8.0 (tried 4.4.3 too with same results)
stompit 0.25.0
I'm always connecting this way:
var server1 = { 'host': 'activemq-1.server.lan' };
var server2 = { 'host': 'activemq-2.server.lan' };
var servers = [server1, server2];
var reconnectOptions = { 'maxReconnects': 10 };
var manager = new stompit.ConnectFailover(servers, reconnectOptions);
Headers I set for each frame:
const sendHeaders = {
  'destination': '/queue/name_of_queue',
  'content-type': 'text/plain',
  'persistent': 'true'
};
I'm not allowed to set the content-length header, as this would force the server to interpret the stream as a binary stream.
When connected to the server, I connect to a PostgreSQL server to fetch the data to send.
What works:
var frame = client.send(sendHeaders);
frame.write(row.pim_content);
frame.end();
But it only works on the development machine. When running this in the production environment, the script runs without throwing errors but never sends the message to the server.
So I tried a different method, just to have a callback when the server receives a message.
var channel = new stompit.Channel(manager);

channel.send(sendHeaders, xml_content, (err) => {
  if (err) {
    console.log(err);
  } else {
    console.log('Message successfully transferred');
  }
});
Now I get the same results in production and development. It is working as expected, but ...
It only works as long as the body (xml_content) is at most 1352 characters long. When I add a single additional character, the callback of channel.send() never fires.
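For reference, a minimal end-to-end sketch of the frame-based send wrapped in stompit's ConnectFailover connect callback (the error handler and the explicit disconnect() call are assumptions based on the stompit API, not something shown in my snippets above):
manager.connect(function (error, client, reconnect) {
  if (error) {
    console.log('Connect error: ' + error.message);
    return;
  }

  client.on('error', function (err) {
    console.log('Client error: ' + err.message);
    reconnect();
  });

  var frame = client.send(sendHeaders);
  frame.write(xml_content); // the XML body fetched from PostgreSQL
  frame.end();

  // Cleanly close the connection after the frame has been written.
  client.disconnect();
});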
I'm running out of ideas about what to check/test next to get this working. I hope someone is reading this, laughing, and pointing me in the right direction. Any ideas greatly appreciated!
Thanks in advance,
Stefan

Couchbase Multi-Server Setup Issue

We have our Couchbase server set up with three EC2 instances: the first instance has only the data service running, the second instance has the index service running, and the third instance has the query service running.
The index & query servers were added to the data server using the Couchbase web console, which has an "Add Servers" option under "Server Nodes", as referenced from this article.
Now, for example, if I connect to the bucket residing on the server using the Node.js SDK and Ottoman and create a new user, it is able to connect to the bucket; however, it is not able to save the document in the bucket and gives me a "segmentation fault (core dumped)" error.
Please let us know if we need to make any changes to the way the servers are set up, or how we should go ahead with the above example so that we are able to create a user.
Software Versions:
Couchbase : 4.5
Couchbase Nodejs SDK : 2.2
Ottoman : 1.0.3
This function is run from AWS Lambda using Node.js v4.3.
The error I am getting is "Segmentation fault (core dumped)".
Below is the AWS Lambda function that I have tried:
var couchbase = require('couchbase');
var ottoman = require('ottoman');
var config = require("./config");

var myCluster = new couchbase.Cluster(config.couchbase.server); // here tried connecting to either data / index / query server
ottoman.bucket = myCluster.openBucket(config.couchbase.bucket);

require('./models/users');
ottoman.ensureIndices(function(err) {
  if (err) {
    console.log('failed to create necessary indices', err);
    return;
  }
  console.log('ottoman indices are ready for use!');
});

var user = require('./models/users');

exports.handler = function(event, context) {
  user.computeHash(event.password, function(err, salt, hash) {
    if (err) {
      context.fail('Error in hash: ' + err);
    } else {
      user.createAndSave("userDetails details sent to the user creation function", function (error, done) {
        if (error) {
          return context.fail(error.toString());
        }
        context.succeed({
          success: true,
          data: done
        });
      });
    }
  });
};
When I run the above function locally (using node-lambda) to test it, it gives the same "Segmentation fault (core dumped)" error, and when it is uploaded to Lambda and tested, it gives the following error:
{
"errorMessage": "Process exited before completing request"
}
Thanks in advance
This is a known issue related to the MDS scenario you are using (https://issues.couchbase.com/browse/JSCBC-316). It will be resolved in our next release at the beginning of August.

How to replicate from CouchDB to PouchDB?

I've set up a local CouchDB database and I'd like to replicate it to a PouchDB database, using JavaScript in a web page running on localhost.
With the code below I get this error:
Origin http://localhost is not allowed by Access-Control-Allow-Origin.
With http:// removed from REMOTE, I don't get an error, but no docs are shown as replicated.
Looking at IndexedDB databases from Chrome DevTools, I can see the database has been created (but doesn't appear to have documents).
Running in Chrome 29.0.1535.2 canary.
Can I do this locally, or do I need to set up a remote CouchDB database and enable CORS (as per the CouchDB docs)?
var REMOTE = 'http://127.0.0.1:5984/foo';
var LOCAL = 'idb://foo';

Pouch(LOCAL, function(error, pouchdb) {
  if (error) {
    console.log("Error: ", error);
  } else {
    var db = pouchdb;
    Pouch.replicate(REMOTE, LOCAL, function (error, changes) {
      if (error) {
        console.log('Error: ', error);
      } else {
        console.log('Changes: ', changes);
        db.allDocs({include_docs: true}, function(error, docs) {
          console.log('Rows: ', docs.rows);
        });
      }
    });
  }
});
You can do it locally, but CORS has to be enabled.
When you remove "http://" from the remote URL, Pouch is going to replicate your DB into a new IndexedDB-backed PouchDB named "localhost" (or actually "_pouch_localhost" or something like that - it adds a prefix).
Unless you're serving up this page from CouchDB itself (on the same host & port), you will need to enable CORS to get replication to CouchDB working.
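If you're running a standalone CouchDB (1.x era, matching the Pouch API used above), CORS can be enabled through the _config HTTP API along these lines (a sketch; exact settings vary by CouchDB version and you may need admin credentials):
curl -X PUT http://127.0.0.1:5984/_config/httpd/enable_cors -d '"true"'
curl -X PUT http://127.0.0.1:5984/_config/cors/origins -d '"http://localhost"'
curl -X PUT http://127.0.0.1:5984/_config/cors/credentials -d '"true"'
curl -X PUT http://127.0.0.1:5984/_config/cors/methods -d '"GET, PUT, POST, HEAD, DELETE"'
curl -X PUT http://127.0.0.1:5984/_config/cors/headers -d '"accept, authorization, content-type, origin, referer"'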
