I have a question related to uploading an image and sharing it.
I found a related question with an answer (NodeJS gcloud - Upload to google storage with public-read property/custom cache-expire), but it does not work. Google Cloud responds with this error:
{ [Error: Required]
errors: [ { domain: 'global', reason: 'required', message: 'Required' } ],
code: 400,
message: 'Required',
response: undefined }
Ideally I want to upload a file and then access it through a public URL. I don't want any streaming solution, i.e. opening the file through the API.
// bucket is defined, uploading is fine
var file = bucket.file(id);
stream.pipe(file.createWriteStream());

// Giving permissions
bucket.acl.default.add({
  scope: "allUsers",
  role: gcloud.storage.acl.READER_ROLE
}, function(err) {
  console.log(err);
  // I am getting the error here
});
Thanks!
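For reference, here is a minimal sketch of an alternative approach: make each uploaded object public once the upload finishes instead of setting a default bucket ACL. It assumes the current @google-cloud/storage client and a hypothetical bucket name, so treat it as a sketch rather than a confirmed fix for the "Required" error above.

// Sketch only: assumes @google-cloud/storage and that per-object ACLs are acceptable.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();
const bucket = storage.bucket('my-bucket'); // hypothetical bucket name

const file = bucket.file(id);
stream
  .pipe(file.createWriteStream())
  .on('finish', function () {
    // Make just this object publicly readable once the upload completes.
    file.makePublic(function (err) {
      if (err) return console.error(err);
      console.log('https://storage.googleapis.com/' + bucket.name + '/' + file.name);
    });
  });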
I ran into a problem that I think is worth writing up. I may not fully understand the underlying issue, but I think I have a solution. Anyway, the problem:
I'm hosting an Elasticsearch Service domain on AWS, and I'm trying to access that service locally and/or through my EC2 instance hosted on AWS.
When I try locally I get this error: Request Timeout after 30000ms
When I try it on my EC2 instance I get this error: AWS Credentials error: Could not load credentials from any providers
Here was how I set up the credentials and made the query:
const AWS = require('aws-sdk');
const connectionClass = require('http-aws-es');
const elasticsearch = require('elasticsearch');
try {
  var elasticClient = new elasticsearch.Client({
    host: "https://some-elastic.us-east-1.es.amazonaws.com/",
    log: 'error',
    connectionClass: connectionClass,
    amazonES: {
      region: 'us-east-1',
      credentials: new AWS.Credentials('id', 'key')
    }
  });

  elasticClient.indices.delete({
    index: 'foo',
  }).then(function (resp) {
    console.log("Successful query!");
    console.log(JSON.stringify(resp, null, 4));
  }, function (err) {
    console.trace(err.message);
  });
} catch (err) {
  console.log(err);
} finally {
}
So, as stated, I kept getting these errors. I tried many other variations of passing the credentials.
My vague understanding of the problem is that the credentials set in the amazonES object are being ignored, or that the region isn't being passed along with the credentials, so AWS doesn't know where to look for the credentials.
Anyway, here is the solution:
AWS.config.update({
  secretAccessKey: 'key',
  accessKeyId: 'id',
  region: 'your region ex. us-east-1'
});

var elasticClient = new elasticsearch.Client({
  host: "https://some-elastic.us-east-1.es.amazonaws.com/",
  log: 'error',
  connectionClass: connectionClass,
  amazonES: {
    credentials: new AWS.EnvironmentCredentials('AWS'),
  }
});
It's a bit of a buggy situation. I couldn't find this solution anywhere online, and I hope it helps someone who runs into the same errors in the future.
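As a side note, here is a minimal sketch of what AWS.EnvironmentCredentials('AWS') expects, assuming the credentials are exported as environment variables before the process starts (the variable names below are the standard AWS SDK ones):

// Assumed environment, set outside the code:
//   AWS_ACCESS_KEY_ID=...
//   AWS_SECRET_ACCESS_KEY=...
const AWS = require('aws-sdk');

// 'AWS' is the env-var prefix, so this reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY.
const creds = new AWS.EnvironmentCredentials('AWS');
creds.get(function (err) {
  if (err) console.error('Could not load credentials from the environment:', err);
  else console.log('Loaded access key:', creds.accessKeyId);
});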
I'm trying to create a new Cloud Run service from firebase functions using the googleapis client library. The following code:
const auth = new google.auth.GoogleAuth({
  projectId,
  scopes: ['https://www.googleapis.com/auth/cloud-platform']
});
const authClient = await auth.getClient();

const result = await google.run({
  version: 'v1',
  auth: authClient
}).namespaces.services.create({
  parent: `namespaces/${projectId}`,
  requestBody: {
    metadata: {
      name: 'asdf'
    },
    spec: {
      template: {
        spec: {
          containers: [
            {
              image: 'gcr.io/graph-4d1ec/graph@sha256:80c764961657d7e2fe548b3886c4662c55c9b5ac881aad5a74cce2d1f97895b8',
              env: [
                { name: 'URL', value: url }
              ]
            }
          ]
        }
      },
      traffic: [{ percent: 100, latestRevision: true }]
    }
  }
}, {})
Produces an error:
Error: The request has errors
at Gaxios._request (/srv/node_modules/gaxios/build/src/gaxios.js:85:23)
at <anonymous>
at process._tickDomainCallback (internal/process/next_tick.js:229:7)
No further information is provided as to what is wrong with this request.
What am I doing wrong?
Most notably, the API client library you're using points by default to run.googleapis.com.
However, when using namespaces.services.create you need a regional API endpoint, such as us-central1-run.googleapis.com. I'm not familiar with Node.js, but you need to change the API endpoint from the default to this value.
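If the Node.js client supports overriding the root URL per service, something like the sketch below might work; treat the rootUrl option name as an assumption to verify against your installed googleapis version.

// Sketch only: 'rootUrl' as a client option is an assumption; check your
// googleapis version before relying on it.
const { google } = require('googleapis');

async function createService(projectId, authClient, requestBody) {
  const run = google.run({
    version: 'v1',
    auth: authClient,
    rootUrl: 'https://us-central1-run.googleapis.com' // regional endpoint
  });
  // Same request body as in the question above.
  return run.namespaces.services.create({
    parent: `namespaces/${projectId}`,
    requestBody
  });
}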
You are in super luck: I published a blog post just a few minutes ago explaining how gcloud run deploy works under the covers, with details on the API calls, how updates are made, etc.: https://ahmet.im/blog/gcloud-run-deploy/ It has sample Go code linked at the end that you can study. Note that "updating" Cloud Run services has several other intricacies to understand, so make sure to check out the blog post.
Furthermore, to debug the issue you are having, I'm assuming (again, I know nothing about Node.js) you might find more info in the result object, which may be storing an error value or the HTTP response code and body.
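As a hedged illustration of that suggestion: the error thrown by the client usually carries the underlying HTTP response, and its body often names the rejected field. The response/data property names below are my assumption about the gaxios error shape, run is the client built as in the sketch above, and the snippet assumes it runs inside the same async function as the original call.

try {
  // requestParams is assumed to be the same { parent, requestBody } object as above.
  const result = await run.namespaces.services.create(requestParams);
  console.log(result.data);
} catch (err) {
  console.error(err.message);
  // Assumed gaxios error shape: err.response.status / err.response.data.
  if (err.response) {
    console.error(err.response.status, JSON.stringify(err.response.data, null, 2));
  }
}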
I want to send a verification email to new people signing up, via AWS SES in Node.js:
var params = {
  Destination: {
    ToAddresses: [
      '...',
    ]
  },
  Message: {
    Body: {
      Html: {
        Data: '...',
        Charset: 'utf-8'
      },
      Text: {
        Data: '...',
        Charset: 'utf-8'
      }
    },
    Subject: {
      Data: '...',
      Charset: 'utf-8'
    }
  },
  Source: '...',
  ReturnPath: '...',
};

const ses = new AWS.SES({
  my_accessKeyId,
  my_secretAccessKey,
  my_region,
})

ses.sendEmail(params, function (err, data) {
  // ...
});
Unfortunately, nothing happens and after a minute or so I get the following error:
{ Error: connect ETIMEDOUT 35.157.44.176:443
at Object._errnoException (util.js:992:11)
at _exceptionWithHostPort (util.js:1014:20)
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1186:14)
message: 'connect ETIMEDOUT 35.157.44.176:443',
code: 'NetworkingError',
errno: 'ETIMEDOUT',
syscall: 'connect',
address: '35.157.44.176',
port: 443,
region: 'eu-central-1',
hostname: 'email.eu-central-1.amazonaws.com',
retryable: true,
time: 2018-10-18T16:52:00.803Z }
I am not sure if the error results from a mistake in my code, as I am currently only using the sandbox environment. But I verified both the sending and the receiving email address, which should work fine even in sandbox mode.
P.S. I am not quite sure how to properly test my application while in sandbox mode, as I am not allowed to send to unverified email addresses. Is it possible to request production access without a proper application running?
You need to change the region from eu-central-1 to a region where an SES endpoint exists (e.g. eu-west-1) in order to send emails out:
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/regions.html
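A minimal sketch of pinning the SES client to such a region, reusing the params object from the question (eu-west-1 is just an example region, and the credentials are assumed to come from the environment or the shared credentials file):

var AWS = require('aws-sdk');

// Point the SES client at a region that actually has an SES endpoint.
var ses = new AWS.SES({ apiVersion: '2010-12-01', region: 'eu-west-1' });

ses.sendEmail(params, function (err, data) {
  if (err) console.error(err);
  else console.log('MessageId:', data.MessageId);
});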
It's not obvious, so we'll need to do some troubleshooting.
First, let's see if you can connect from your local machine to the SES API using the AWS CLI. Make sure you have set up your AWS credentials using aws configure, and try:
aws ses list-identities
If this works, you should see a list of validated email addresses.
If not, you will see an error (permissions, maybe); a timeout suggests a network connectivity issue.
Note on credentials: don't include your credentials in your code. Either configure a credentials file (which happens when you run aws configure as above), load them from a shared JSON file, or use environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
Next, try to do the same in code:
// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');
// Set the region
AWS.config.update({region: 'eu-central-1'});
// Create SES Client
const ses = new AWS.SES({apiVersion: '2010-12-01'})
// Create params
var params = {
  IdentityType: "EmailAddress",
  MaxItems: 123,
  NextToken: ""
};

ses.listIdentities(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
After you set up your credentials as mentioned above, use this in your send-email code to create the SES client:
// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');
// Set the region
AWS.config.update({region: myregion});
// Create SES Client
const ses = new AWS.SES({apiVersion: '2010-12-01'})
Other things to check:
Make sure that all the email addresses are verified ("From", "Source", "Sender", or "Return-Path")
Make sure that you have the correct SES Access Keys
If using EC2, make sure that the security groups allow access to the AWS API.
I hope it's working by now. If not, I can only say: go to this page and make sure you haven't missed anything obvious.
You need to make sure that the user you're using to send the email
my_accessKeyId,
my_secretAccessKey,
has the correct IAM permissions (policy attached to it) to send email with SES. Test with the SES full access policy, I believe.
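For illustration, a minimal IAM policy that allows sending through SES might look like the JSON below; the exact scoping (Resource, extra actions) is an assumption you should tighten for your own setup:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ses:SendEmail", "ses:SendRawEmail"],
      "Resource": "*"
    }
  ]
}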
I have migrated data from the Parse website to Azure's version of Parse and noticed some components were missing, like an email adapter. So I followed the instructions from https://www.npmjs.com/package/parse-server-postmark-adapter. I'm able to receive the email to change my password.
But I get this error when I click on the link to change my password:
"level":"error","message":"Uncaught internal server error. [Error: Can't set headers after they are sent.]
Can anyone explain why I'm getting this message? Also, I put the code to configure Postmark in my config.js file.
Edit:
var PostmarkAdapter = require('parse-server-postmark-adapter');

module.exports = {
  server: {
    appName: 'myapp',
    publicServerURL: 'http://myapp.azurewebsites.net/parse',
    verifyUserEmails: true, // Enable email verification
    emailAdapter: PostmarkAdapter({
      apiKey: 'api-key-0000',
      fromAddress: 'someemail@email.com',
    })
  },
  dashboard: {},
  storage: {},
  push: {}
}
Disclaimer: This is related to another question I asked here. I was advised to ask a new question rather than update that one; I hope that this is correct. If not, please let me know and ignore this question.
I have been trying to use Neo4j in Microsoft Azure, using this tutorial. I created a VM running Linux and Neo4j. I know this works fine because I have been able to access the database via the web admin portal, where I can create and delete entries. However, the problem comes when I try to use Node.js to insert elements.
Here is the code for the script:
function insert(item, user, request) {
  //comment to trigger .js creation
  var neo4j = require('neo4j');
  var db = new neo4j.GraphDatabase('http://<username>:<password>@neo4jmobile.cloudapp.net:7474');

  var node = db.createNode({ name: item.name });
  node.save(function (err, node) {
    if (err) {
      console.error('Error saving new node to database:', err);
    }
    else {
      console.log('Node saved to database with id:', node.id);
    }
  });

  request.execute();
}
I am getting this error message:
Error saving new node to database: { [Error: connect ETIMEDOUT]
stack: [Getter/Setter],
code: 'ETIMEDOUT',
errno: 'ETIMEDOUT',
syscall: 'connect',
__frame:
{ name: 'GraphDatabase_prototype__getRoot__1',
line: 76,
file: '\\\\10.211.156.195\\volume-0-default\\bf02c8bd8f7589d46ba1\\4906fa4587734dd087df8e641513f602\\site\\wwwroot\\App_Data\\config\\scripts\\node_modules\\neo4j\\lib\\GraphDatabase.js',
prev:
{ name: 'GraphDatabase_prototype_getServices__2',
line: 99,
file: '\\\\10.211.156.195\\volume-0-default\\bf02c8bd8f7589d46ba1\\4906fa4587734dd087df8e641513f602\\site\\wwwroot\\App_Data\\config\\scripts\\node_modules\\neo4j\\lib\\GraphDatabase.js',
prev: [Object],
active: false,
offset: 5,
col: 12 },
active: false,
offset: 5,
col: 12 },
rawStack: [Getter] }
Any help would be appreciated!
Your URL is still wrong: http://<username>:<password>@http://neo4jmobile.cloudapp.net:7474 should be http://<username>:<password>@neo4jmobile.cloudapp.net:7474
In the referenced tutorial (which is quite good btw) he says:
var db = new neo4j.GraphDatabase('http://<username>:<password>@<your neo url>.cloudapp.net:7474');
where <your neo url> refers to the hostname, i.e. neo4jmobile
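For illustration, a minimal sketch of the corrected connection with the node-neo4j client used in the question; the credentials and hostname below are placeholders to replace with your own values:

var neo4j = require('neo4j');

// Replace <username>, <password> and the hostname with your own values.
var db = new neo4j.GraphDatabase('http://<username>:<password>@neo4jmobile.cloudapp.net:7474');

// Create and save a test node (node-neo4j v1 style API, matching the code above).
var node = db.createNode({ name: 'example' });
node.save(function (err, savedNode) {
  if (err) return console.error('Error saving new node to database:', err);
  console.log('Node saved to database with id:', savedNode.id);
});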