I followed the Mailin docs but haven't been able to make it work.
My domain is: aryan.ml
I'm using Amazon Route 53 for DNS configurations. Here's a screenshot of that:
On my app.js file I'm running the sample code given by the official Mailin docs as well. The code is under "Embedded inside a node application" heading at http://mailin.io/doc
var mailin = require('mailin');

mailin.start({
    port: 25,
    disableWebhook: true // Disable the webhook posting.
});

/* Access simplesmtp server instance. */
mailin.on('authorizeUser', function(connection, username, password, done) {
    if (username == "johnsmith" && password == "mysecret") {
        done(null, true);
    } else {
        done(new Error("Unauthorized!"), false);
    }
});

/* Event emitted when a connection with the Mailin smtp server is initiated. */
mailin.on('startMessage', function(connection) {
    /* connection = {
        from: 'sender@somedomain.com',
        to: 'someaddress@yourdomain.com',
        id: 't84h5ugf',
        authentication: { username: null, authenticated: false, status: 'NORMAL' }
    }; */
    console.log(connection);
});

/* Event emitted after a message was received and parsed. */
mailin.on('message', function(connection, data, content) {
    console.log(data);
    /* Do something useful with the parsed message here.
     * Use parsed message `data` directly or use raw message `content`. */
});
I also tried testing my setup using mxtoolbox but it "Failed to connect" to my MX server. http://mxtoolbox.com/SuperTool.aspx?action=smtp%3amxemail.aryan.ml&run=toolpage
Any help/guidance is much appreciated.
You need to make sure these two things are correct (for AWS):
Turns out it was my Amazon security group that kept blocking port 25. If you're experiencing similar problems on AWS, definitely make sure you allow traffic through port 25 of your server.
For the Mailin package, navigate to /node_modules/mailin/lib/mailin.js and change the default value of host to 0.0.0.0 (it's on line 27 in my version). This solves the problem on AWS; a sketch of what that looks like is below.
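For reference, a minimal sketch of the change, with the caveat that the exact line number and whether start() accepts a host option depend on the mailin version you have installed:
// node_modules/mailin/lib/mailin.js — change the default host so the SMTP
// server listens on all interfaces instead of only localhost:
//   host: '0.0.0.0',
// Alternatively, if your mailin version exposes a host option (an assumption —
// check your version's docs), pass it to start() and avoid patching node_modules:
mailin.start({
    port: 25,
    host: '0.0.0.0', // assumption: supported as a start() option in your version
    disableWebhook: true
});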
Related
I'd like to restrict the connection to the socket server only to a list of IP, is there any simple way to do that? I couldn't find any so far.
Thanks
This can be done a few ways..
Put some middleware on the connection to read the incoming request's IP and check it against a whitelist (an array); if it's not in the list, respond with an error code.
Use an ACL via NGINX: if the client's IP is not listed, NGINX blocks the request before it ever reaches your app.
Not really based on IP, but you can control access via JWT. This requires you to generate a token and send it with each request, but it provides a little more security and you won't have to manage IP ranges. In this architecture your user requests the token; once the token is sent back, you make the request to the socket server with the token in a header or as a request param. The socket server then decodes it; if it decodes correctly (meaning it's valid) let them connect, otherwise don't. A rough sketch of this approach follows below.
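A minimal sketch of that token check as a socket.io middleware, assuming the jsonwebtoken package; the handshake field names and the JWT_SECRET variable are placeholders, not part of the original answer:
const jwt = require('jsonwebtoken');   // assumption: jsonwebtoken is installed
const SECRET = process.env.JWT_SECRET; // hypothetical secret shared with the token issuer

io.use((socket, next) => {
    // expect the token in the handshake auth payload or as a query param
    const token = (socket.handshake.auth && socket.handshake.auth.token) || socket.handshake.query.token;
    if (!token) return next(new Error("No token provided."));
    jwt.verify(token, SECRET, (err, decoded) => {
        if (err) return next(new Error("Invalid token."));
        socket.user = decoded; // keep the decoded claims for later use
        next();
    });
});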
Following @proxim0's post, I used a middleware:
const fs = require('fs');

io.use((socket, next) => {
    fs.readFile(__dirname + '/config.json', function(err, file) {
        if (err) return next(new Error("Internal error: " + err.message));
        const config = JSON.parse(file.toString('utf8'));
        if (!config.restrictAccess) return next();
        var ip = socket.request.headers['x-forwarded-for'] || socket.request.socket.remoteAddress || null;
        if (ip == null || config.allowedAddresses.indexOf(ip) == -1) {
            console.log("Access denied from remote IP " + ip);
            return next(new Error("Your IP address is not allowed to access this resource."));
        }
        next();
    });
});
With a config.json file in the following format:
{
    "restrictAccess": true,
    "allowedAddresses": [
        "127.0.0.1",
        "123.456.789"
    ]
}
Any comment on improvement is welcome.
Introduction
My POST request works locally, i.e. on localhost, but doesn't work when the website is deployed.
I am using nextjs, node, nodemailer, axios and nginx. I have also used fetch instead of axios and it gave me the same issue.
The situation
I have a handleSubmit function, that takes some inputs from a contact form and sends it to my Gmail account:
axios({
    method: "POST",
    url: "/api/submit",
    data: body,
    headers: {
        'Accept': 'application/json',
        'Content-Type': 'application/json'
    }
}).then((response) => {
    if (response.data.status === 'success') {
        alert("Email sent, awesome!");
    } else {
        alert("Oops, something went wrong. Try again")
    }
})
The api/submit.js is as follows:
import nodemailer from "nodemailer"

var transport = {
    service: 'gmail',
    host: "smtp.gmail.com",
    port: 587,
    secure: false,
    auth: {
        user: process.env.USER,
        pass: process.env.PASS
    }
}

var transporter = nodemailer.createTransport(transport);

transporter.verify((error, success) => {
    if (error) {
        console.log("There was an error:" + error);
    } else {
        console.log("Server is ready to take messages")
    }
})
export default async (req, res) => {
    switch (req.method) {
        case "POST":
            var name = req.body.name;
            var content = name + " text "; // concatenated message body (simplified)
            var mail = {
                from: name,
                to: "myemail@gmail.com",
                text: content
            }
            transporter.sendMail(mail, (err, data) => {
                if (!err) {
                    res.json({
                        status: "success"
                    })
                    res.end();
                }
            })
            break
        case "GET":
            res.status(200).json({ name: "John Doe" })
            break
        default:
            res.status(405).end()
            break
    }
}
The code works locally: when I run npm run dev or npm start, it posts to http://localhost:3000/api/submit and I receive the email within seconds.
However, when I deploy the website on DigitalOcean and hit the submit button nothing happens for 60 seconds then I get a status code of 504 on my POST request. If I send a GET request to api/submit, it works and I see the JSON for that request, so it's an issue with the POST request specifically.
My Nginx logs it as the following:
2021/02/27 13:59:35 [error] 95396#95396: *3368 upstream timed out (110: Connection timed out) while reading response header from upstream, client: my.ip, server: website.com, request: "POST /api/submit HTTP/1.1", upstream: "http://127.0.0.1:3000/api/submit", host: "website.com", referrer: "https://website.com/contact"
I've been trying to debug it for days now, but I just can't figure it out.
Anyone have any ideas?
EDIT 01.03.2021: The problem is resolved, embarrassingly.
Turns out the ports were all open, telnet worked for all mail ports and smtp hosts.
When I hard-coded the username/password, e.g. an ephemeral mail account with user: "blah@ephemeral.email", pass: "fakepassword", the email would send in production.
Turns out my process.env.USER and process.env.PASS were not being populated with the values from my .env file during npm run build, because I was cloning my GitHub repo, which didn't contain the .env file. After creating the .env file on the server and running npm run build again, the code worked fine.
Thank you for the replies, sorry for the hassle.
Kindly check, from the DigitalOcean machine where your application is deployed, whether the port is open for the following host:
Host: smtp.gmail.com, Port: 587
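One quick way to check this from the droplet itself, sketched here with Node's built-in net module (the host and port are the ones above; the timeout value is just a suggestion):
// probe the SMTP endpoint with a plain TCP connection and log the outcome
const net = require('net');
const socket = net.connect({ host: 'smtp.gmail.com', port: 587 });
socket.setTimeout(5000);
socket.on('connect', () => { console.log('Port 587 is reachable'); socket.end(); });
socket.on('timeout', () => { console.error('Connection timed out'); socket.destroy(); });
socket.on('error', (err) => { console.error('Connection failed: ' + err.message); });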
I guess this is not a problem with your code, but with the deploy process. Are you requesting your API address directly or going through a proxy? If you're using a proxy, maybe the proxy is not forwarding requests to your application in the right way.
I think it works on your local machine because localhost resolves there, but on the server it may not be the same.
You can check your hostnames in the server by the following command
cat /etc/hosts
If the host name is different, you possibly have to allow CORS as well. A good place to check is your browser's inspector (in Google Chrome, right-click -> Inspect).
Then check whether you find any message related to access to the server from your frontend.
I suspect I'm not the first to do this but it's rather embarrassing.
First, I confirmed that the ports were all open, telnet worked for all mail ports and smtp hosts.
Then, when I hard-coded the username/password, e.g. an ephemeral mail account with user: "blah@ephemeral.email", pass: "fakepassword", the email would send in production.
Then I realized that my process.env.USER and process.env.PASS were not being replaced with the values from my .env file during npm run build, because I was cloning my GitHub repo, which didn't contain the .env file. Meaning I was trying to log in as username = "process.env.USER" and password = "process.env.PASS".
After creating my .env file on the server and running npm run build the code worked fine.
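For anyone else hitting this, the fix is just making sure the env file exists on the server before building. A minimal sketch, reusing the variable names from the code above (the values are obviously placeholders):
# .env — lives on the server, not committed to git
USER=yourname@gmail.com
PASS=your-app-password
Then rebuild and restart so the values get picked up: npm run build && npm start.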
Thank you for the replies and sorry for the hassle.
I keep getting the following error response from node when trying to run a read call to rally:
Error: getaddrinfo ENOTFOUND rally1.rallydev.com rally1.rallydev.com:443
I am using the Rally Node SDK, and node v7. I am on a local machine. It is successfully reaching and logging the 'releaseoid' before the 'try'.
I feel like I am not specifying http (I was before, and now I've completely commented out the server option, letting the SDK default it). But it continues to give back that error. I could not find (or possibly understand) other general Node guidance that may address this situation. I am not clear where port 443 is coming from, as I am not specifying it. Is the SDK adding it?
If I specify the server address without http:
server: 'rally1.rallydev.com',
I still get an error, but this time:
Error: Invalid URI "rally1.rallydev.com/slm/webservice/v2.0null
I am new to Node and not sure if I am having a problem with Node or the Rally Node SDK.
Code below.
var rally = require('rally');

var rallyApi = rally({
    apiKey: 'xx',
    apiVersion: 'v2.0',
    //server: 'rally1.rallydev.com',
    requestOptions: {
        headers: {
            'X-RallyIntegrationName': 'Gather release information after webhook',
            'X-RallyIntegrationVendor': 'XX',
            'X-RallyIntegrationVersion': '0.9'
        }
    }
});

// exports.getReleaseDetails = function(releaseoid, result) {
//     console.log('get release details being successfully called');
// }

module.exports = {
    getReleaseDetails: async (releaseoid) => {
        console.log(releaseoid);
        try {
            let res = await rallyApi.get({
                ref: 'release/' + releaseoid,
                fetch: [
                    'Name',
                    'Notes',
                    'Release Date'
                ]
                //requestOptions: {}
            });
            res = await res;
            console.log(res);
        } catch (e) {
            console.error('something went wrong');
            console.log(e);
        }
    }
}
That mostly looks right. I haven't tried to use async/await with the node toolkit yet; it would be interesting to see if that works. It should, since get and all the other methods return promises in addition to handling standard node callback syntax.
But anyway, I think the issue you're having is a missing leading / on your ref.
rallyApi.get({
ref: '/release/' + releaseOid
});
Give that a shot?
As for the network errors, is it possible that you're behind a proxy on your network? You're right though, https://rally1.rallydev.com is the default server so you shouldn't have to specify it. FYI, 443 is just the default port for https traffic.
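If a proxy is indeed in play, here is a hedged sketch of how you might route the toolkit through it; as far as I know requestOptions are passed through to the underlying request library, which accepts a proxy setting (the proxy URL below is a placeholder):
var rally = require('rally');
var rallyApi = rally({
    apiKey: 'xx',
    requestOptions: {
        // hypothetical corporate proxy address — replace with your own
        proxy: 'http://your-proxy-host:8080'
    }
});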
I am new to nodejs and working on a proof of concept just for fun.
Background:
I have a cloud directory of user information (like username, password and other info). This cloud directory can be used to authenticate a user only via restful API (i.e. no direct connectivity using LDAP or JDBC etc.).
Aim:
To build an LDAP interface for this cloud directory. To start with I am interested only in authentication (LDAP bind).
Intended Flow:
LDAPClient initiates a standard LDAP simple BIND request:
Host: host where my nodejs app will run
Port: 1389 (port that my nodejs app will be bound to)
Username: a user from cloud directory
Password: user's password
This request is received by my NodeJS app (I am using ldapjs module).
// process ldap bind operation
myLdapServer.bind(searchBase, function (bindReq, bindRes, next) {
    // bind creds
    var userDn = bindReq.dn.toString();
    var userPw = bindReq.credentials;
    console.log('bind DN: ' + bindReq.dn.toString());
    ...
    ...
});
Within the above callback, I must use http.request to fire a RESTful API call (POST) to the cloud directory with the details I received from the BIND request (i.e. username and password).
If the REST API response status is 200 (auth success), I must return success to the LDAPClient; otherwise I must return an invalid credentials error.
Success:
bindRes.end();
return next();
Failure:
console.log("returning error");
return next(new ldap.InvalidCredentialsError());
Questions:
Is this possible using NodeJS? I ask because of the nesting involved, as evident above (calling a REST API from within a callback). Also, since this is an authentication operation, is it meant to be a blocking operation?
Thanks,
Jatin
UPDATE:
Thanks Klvs, my solution is more or less like the one you posted. Please have a look at the snippet below:
// do the POST call from within callback
var postRequest = https.request(postOptions, function(postResponse) {
    console.log("statusCode: ", postResponse.statusCode);
    if (postResponse.statusCode != 200) {
        console.log("cloud authentication failed: " + postResponse.statusCode);
        return next(ldapModule.InvalidCredentialsError());
    } else {
        postResponse.on('data', function(d) {
            console.info('POST result:\n');
            process.stdout.write(d);
            console.info('\n\nPOST completed');
        });
        res.end();
        return next();
    }
});

// write json data
postRequest.write(postData);
postRequest.end();

postRequest.on('error', function(e) {
    console.error("postRequest error occurred: " + e);
});
Successful authentication works fine; however, failed authentication does not send any response back to the LDAPClient at all. My client just times out instead of showing an authentication failure error. I do see the "cloud authentication failed:" log message on the Node console, which means the statement below is not doing what I intend it to do:
return next(ldapModule.InvalidCredentialsError());
Note that the above statement works when I remove the REST call and just return the error back to the client.
Am I missing something?
Thanks,
Jatin
Of course it's possible in nodejs. If I understand correctly, you want to make an authenticating request to a server and have it either fail or succeed.
const request = require('request')

// process ldap bind operation
myLdapServer.bind(searchBase, function (bindReq, bindRes, next) {
    // bind creds
    var userDn = bindReq.dn.toString();
    var userPw = bindReq.credentials;
    console.log('bind DN: ' + bindReq.dn.toString());

    // POST the credentials to the cloud directory's auth endpoint
    // (cloudAuthUrl is a placeholder for whatever URL your directory exposes)
    request.post({ url: cloudAuthUrl, json: { username: userDn, password: userPw } }, (err, res, body) => {
        // note: request only sets err for transport failures, so also check the status code
        if (err || res.statusCode !== 200) {
            console.log("returning error");
            next(new ldap.InvalidCredentialsError());
        } else {
            bindRes.end();
            next();
        }
    });
});
Is that what you're looking for? If so, you just need to get accustomed to callbacks.
I'm working with elasticsearch-js (NodeJS) and everything works just fine as long as long as ElasticSearch is running. However, I'd like to know that my connection is alive before trying to invoke one of the client's methods. I'm doing things in a bit of synchronous fashion, but only for the purpose of performance testing (e.g., check that I have an empty index to work in, ingest some data, query the data). Looking at a snippet like this :
var elasticClient = new elasticsearch.Client({
    host: ((options.host || 'localhost') + ':' + (options.port || '9200'))
});
// Note, I already have promise handling implemented, omitting it for brevity though
var promise = elasticClient.indices.delete({index: "_all"});
/// ...
Is there some mechanism to send in on the client config to fail fast, or some test I can perform on the client to make sure it's open before invoking delete?
Update: 2015-05-22
I'm not sure if this is correct, but perhaps attempting to get client stats is reasonable?
var getStats = elasticClient.nodes.stats();

getStats.then(function(o) {
    console.log(o);
})
.catch(function(e) {
    console.log(e);
    throw e;
});
Via node-debug, I am seeing the promise rejected when ElasticSearch is down / inaccessible with: "Error: No Living connections". When it does connect, o in my then handler seems to have details about connection state. Would this approach be correct or is there a preferred way to check connection viability?
Getting stats can be a heavy call just to ensure your client is connected. You should use ping instead; see the 2nd example at https://github.com/elastic/elasticsearch-js#examples
We use ping too, right after instantiating the elasticsearch-js client connection on startup.
// example from above link
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
    host: 'localhost:9200',
    log: 'trace'
});

client.ping({
    // ping usually has a 3000ms timeout
    requestTimeout: Infinity,
    // undocumented params are appended to the query string
    hello: "elasticsearch!"
}, function (error) {
    if (error) {
        console.trace('elasticsearch cluster is down!');
    } else {
        console.log('All is well');
    }
});
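A small promise-style variant (a sketch): the legacy elasticsearch-js client returns a promise when no callback is passed, so you can ping first and only run real work once the cluster is known to be reachable. The delete call here mirrors the one from the question.
// ping first, then proceed with the actual work only if the cluster responds
client.ping({ requestTimeout: 3000 })
    .then(function () {
        // cluster is reachable; safe to run the real calls now
        return client.indices.delete({ index: '_all' });
    })
    .catch(function (e) {
        console.error('elasticsearch cluster is unreachable: ' + e.message);
    });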