When sending images via axios, I found I have to use FormData. I append my images to it, but when I send the FormData my entire backend freezes and the request just says "pending".
I've been following this.
My attempt so far:
Backend:
Apollo:
import { ApolloServer, makeExecutableSchema } from 'apollo-server-fastify';

const schema = makeExecutableSchema({ typeDefs, resolvers });

const apolloServer = new ApolloServer({
  schema,
  uploads: {
    maxFileSize: 10000000,
    maxFiles: 5,
  },
});

(async function() {
  app.register(apolloServer.createHandler({ path: '/api' }));
})();
schema:
scalar DateTime
scalar Upload

input addUser {
  Email: String!
  Password: String
  FirstName: String!
  LastName: String!
  Age: DateTime!
  JobTitle: String!
  File: Upload
}

type Mutation {
  register(input: addUser!): Boolean
}
resolver:
Mutation: {
  register: async (obj, args, context, info) => {
    // how to get the formData?
  },
},
Frontend:
I build the request like this:
const getMutation = (mutate: MutationNames, returnParams?: any): any => {
  const mutation = {
    login: print(
      gql`
        mutation($email: String!, $password: String!) {
          login(email: $email, password: $password) {
            token
            refreshToken
          }
        }
      `
    ),
    register: print(
      gql`
        mutation(
          $firstName: String!
          $email: String!
          $lastName: String!
          $age: DateTime!
          $jobTitle: String!
          $file: Upload
        ) {
          register(
            input: {
              FirstName: $firstName
              LastName: $lastName
              Email: $email
              Age: $age
              JobTitle: $jobTitle
              File: $file
            }
          )
        }
      `
    ),
  }[mutate];
  if (!mutation) return {};
  return mutation;
};
In this case I'm using the register mutation.
I have a few hooks that handle the data fetching, so I'm not going to include them since it's a lot of code. The data is fetched correctly on the front end, and before posting to the backend I put everything into a FormData object:
const submitForm: SubmitForm = (obj: SendObject) => {
  const Fdata = new FormData();
  Fdata.append('0', fileImp.file);
  Fdata.append('operations', JSON.stringify(obj.data));
  const map = {
    '0': ['variables.file'],
  };
  Fdata.append('map', JSON.stringify(map));
  callAxiosFn(
    {
      method,
      url: 'http://localhost:4000/api',
      data: Fdata,
      // headers: obj.headers,
    },
    qlType.toString()
  );
};
callAxiosFn then calls axios like this:
const response = await axios({
  headers: {
    Accept: 'application/json',
    'x-token': localStorage.getItem('token'),
    'x-refresh-token': localStorage.getItem('refreshToken'),
    ...(config.headers || {}),
  },
  ...config,
});
config is an AxiosRequestConfig.
What I'm sending:
I don't exactly understand how the FormData is supposed to hit my resolver endpoint, and I'm clearly doing something wrong there, since the backend returns:
(node:748) UnhandledPromiseRejectionWarning: [object Array]
(node:748) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:748) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
I realize this is a lot, but I'm at my wits' end here; I've been at this the entire day. Any help is deeply appreciated.
EDIT:
Since my backend was questioned, I want to show that when I send the data without wrapping it in FormData as above, it works:
const submitForm: SubmitForm = (obj: SendObject) => {
  callAxiosFn(
    {
      method,
      url: 'http://localhost:4000/api',
      data: obj.data,
    },
    qlType.toString()
  );
};
obj.data is:
{
  query: "mutation ($firstName: String!, $email: String!, $lastName: String!, $age: DateTime!, $jobTitle: String!, $file: Upload) {\n  register(input: {FirstName: $firstName, LastName: $lastName, Email: $email, Age: $age, JobTitle: $jobTitle, File: $file})\n}\n",
  variables: {
    age: "1977-04-04",
    email: "JhoneDoe@hotmail.com",
    file: File { name: "something.jpg", lastModified: 1589557760497, lastModifiedDate: Fri May 15 2020 17:49:20 GMT+0200 (centraleuropeisk sommartid), webkitRelativePath: "", size: 32355, … },
    firstName: "Jhon",
    jobTitle: "SomethingCool",
    lastName: "Doe",
    password: "CoolPassword!123"
  }
}
The query gets sent in the browser, and the backend receives the data, but the image is not included.
EDIT:
I recently found that my fastify backend might have issues with reading FormData. I tried installing fastify-multipart, but got errors when registering it:

FST_ERR_CTP_ALREADY_PRESENT(contentType) ^ FastifyError [FST_ERR_CTP_ALREADY_PRESENT]:

After that I tried npm uninstall fastify-file-upload, but the error remained.
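If I understand that error correctly, it means two plugins are trying to claim the same content type; with uploads enabled, Apollo presumably registers a multipart parser already, which would make fastify-multipart redundant. A guard like the following might avoid the double registration, assuming Fastify's hasContentTypeParser API (I have not verified this):

const fastifyMultipart = require('fastify-multipart');

// Hypothetical guard: only register fastify-multipart when nothing
// else (e.g. Apollo's upload handling) has claimed multipart bodies.
if (!app.hasContentTypeParser('multipart')) {
  app.register(fastifyMultipart);
}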
Well, I have not explored this topic yet, but I know that axios and GraphQL do not really work that well together, since axios is made mainly for REST API calls. However, I have learned a lot from Ben Awad's channel. The guy is really awesome and explains things clearly, and, most importantly, he is a GraphQL enthusiast who explores and presents various GraphQL topics, along with React.js, TypeORM, and PostgreSQL. Here are some helpful links from his channel that might help with your issue:
Upload Files in GraphQL Using Apollo Upload
How to Upload a File to Apollo Server in React
I hope this helps! Please let me know if it does!
This took some time to figure out; as usual, when you take something for granted, it takes a while to find the mistake.
For anyone having the same problem: remember that the order in which you append things MATTERS!
What I did:
const Fdata = new FormData();
Fdata.append('0', fileImp.file); // NOTICE THIS
Fdata.append('operations', JSON.stringify(obj.data));
const map = { // NOTICE THIS
  '0': ['variables.file'],
};
Fdata.append('map', JSON.stringify(map));
Problem:
Remember when I said the order of appending things matters? The issue here was that the map was appended after the file, while the GraphQL multipart request spec requires operations first, then map, then the files.
The correct way:
const Fdata = new FormData();
Fdata.append('operations', JSON.stringify(obj.data));
const map = { // NOTICE THIS
  '0': ['variables.file'],
};
Fdata.append('map', JSON.stringify(map));
Fdata.append('0', fileImp.file); // NOTICE THIS
Also note that in my question I missed setting the file itself to null in the variables:
variables: {
file: null,
},
This has to be done.
For more info, read here.
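For completeness, since my resolver originally asked "how to get the formData?": you never see the FormData itself. Once the multipart request is processed, the mapped file arrives in the resolver as a promise in the variables. Here is a sketch of how the file can be consumed, assuming the graphql-upload API that Apollo Server bundles (the ./uploads directory is just my example path):

const { createWriteStream } = require('fs');

const resolvers = {
  Mutation: {
    register: async (obj, args, context, info) => {
      // args.input.File resolves to { filename, mimetype, encoding, createReadStream }.
      const { filename, createReadStream } = await args.input.File;
      // Stream the upload to disk; './uploads' must already exist.
      await new Promise((resolve, reject) =>
        createReadStream()
          .pipe(createWriteStream(`./uploads/${filename}`))
          .on('finish', resolve)
          .on('error', reject)
      );
      return true;
    },
  },
};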
@CodingLittle, glad you figured out the answer was related to the multipart form field ordering.
Some things to add (answering as I don't have the 50 reputation required to comment on your answer, despite being the graphql-upload author)…
"Also note that in my question I missed setting the file itself to null in the variables"
This is true, and good to get right, although in reality a lot of GraphQL multipart request spec server implementations will simply replace whatever is at the mapped path for a file with the upload scalar value, without caring what was there; in theory, you could replace the files in variables with asdf instead of null and it would still work. JSON.stringify would have replaced the file instances with something like {}.
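To illustrate that last point, here is a quick sketch you can run in a browser console; a File instance has no enumerable own properties, so it serializes to an empty object rather than the file contents:

// A File serializes to {} because its contents and metadata are
// not enumerable own properties:
const file = new File(['hello'], 'hello.txt', { type: 'text/plain' });
console.log(JSON.stringify({ variables: { file } }));
// => {"variables":{"file":{}}}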
A lot of your headaches could have been avoided if the backend had responded with a clear 400 status and a descriptive error message instead of throwing a gnarly UnhandledPromiseRejectionWarning. If your graphql-upload dependency were up to date on the backend, you would have seen a descriptive error message whenever a request did not conform to the GraphQL multipart request spec's field ordering, as can be seen in the graphql-upload tests:
https://github.com/jaydenseric/graphql-upload/blob/v11.0.0/test/public/processRequest.test.js#L929
Try running npm ls graphql-upload in your backend project to check that only one version is installed, and that it's the latest published to npm (v11 at the time of this answer). Note that if you're relying on Apollo Server to install it for you, they use a very out-of-date version (v8).
Related
I am getting this error when running an AWS Lambda function that pushes data into an Elasticsearch instance.
I can get it to run if I manually remove the { body } destructuring from Helpers.js in node_modules, but I can't find why it keeps erroring on that.
My code:
client.helpers.bulk({
  datasource: docs,
  onDocument(doc) {
    return {
      index: { _index: index, _id: doc.id },
      body: doc.body
    }
  },
  onDrop(doc) {
    console.log("failed to index ", doc.key);
  },
  retries: 5,
  flushBytes: 1000000,
  wait: 10000
})
The error:
{
  "errorType": "Runtime.UnhandledPromiseRejection",
  "errorMessage": "TypeError: Cannot destructure property 'body' of 'undefined' as it is undefined.",
  "reason": {
    "errorType": "TypeError",
    "errorMessage": "Cannot destructure property 'body' of 'undefined' as it is undefined.",
    "stack": [
      "TypeError: Cannot destructure property 'body' of 'undefined' as it is undefined.",
      "    at /var/task/node_modules/@elastic/elasticsearch/lib/Helpers.js:679:81"
    ]
  },
  "promise": {},
  "stack": [
    "Runtime.UnhandledPromiseRejection: TypeError: Cannot destructure property 'body' of 'undefined' as it is undefined.",
    "    at process.<anonymous> (/var/runtime/index.js:35:15)",
    "    at process.emit (events.js:314:20)",
    "    at process.EventEmitter.emit (domain.js:483:12)",
    "    at processPromiseRejections (internal/process/promises.js:209:33)",
    "    at processTicksAndRejections (internal/process/task_queues.js:98:32)"
  ]
}
I was also getting this error when referencing @opensearch-project/opensearch (an elasticsearch-js client fork) and using the client.helpers.bulk helper, in conjunction with aws-elasticsearch-connector for implementing AWS SigV4-signed API requests.
The error message was as follows:
TypeError: Cannot destructure property 'body' of 'undefined' as it is undefined.
    at node_modules/@opensearch-project/opensearch/lib/Helpers.js:704:93
It was quite annoying, and I was not in the mood to implement my own OpenSearch client and interact with the APIs directly, so I dug deeper and found the issue.
How can one reproduce the bug?
I created an isolated test to illustrate the problem. Hopefully it's easily reproducible this way.
import { Client } from '@opensearch-project/opensearch';
import * as AWS from 'aws-sdk';

// My fork of https://www.npmjs.com/package/aws-elasticsearch-connector, capable of signing requests to AWS OpenSearch.
// @opensearch-project/opensearch is not yet capable of signing AWS requests.
const createAwsElasticsearchConnector = require('../modules/aws-oss-connector');
const domain = 'PUT_YOUR_DOMAIN_URL_HERE.es.amazonaws.com';
const index = 'YOUR_TEST_INDEX_NAME';

const bootstrapOSSClient = (): Client => {
  const ossConnectorConfig = createAwsElasticsearchConnector(AWS.config);
  const client = new Client({
    ...ossConnectorConfig,
    node: `https://${domain}`,
  });
  return client;
};
const main = async (): Promise<void> => {
  try {
    console.info('Starting processing');
    // TEST DEFINITION
    const input = [
      { id: '1', name: 'test' },
      { id: '2', name: 'test 2' },
    ];
    const client = bootstrapOSSClient();
    const response = await client.helpers.bulk({
      datasource: input,
      onDocument(doc: any) {
        console.info(`Processing document #${doc.id}`);
        return {
          index: { _index: index, _id: doc.id },
        };
      },
    });
    console.info(`Indexed ${response.successful} documents`);
    // END TEST DEFINITION
    console.info('Finished processing');
  } catch (error) {
    console.warn(`Error in main(): ${error}`);
  }
};

try {
  main().then(() => {
    console.info('Exited main()');
  });
} catch (error) {
  console.warn(`Top-level error: ${error}`);
}
and the result was:
$ npx ts-node ./.vscode/test.ts
Starting processing
Processing document #1
Processing document #2
(node:39232) UnhandledPromiseRejectionWarning: TypeError: Cannot destructure property 'body' of 'undefined' as it is undefined.
    at D:\Development\eSUB\Coronado\git\platform\node_modules\@opensearch-project\opensearch\lib\Helpers.js:704:93
(Use `node --trace-warnings ...` to show where the warning was created)
(node:39232) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:39232) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Indexed 2 documents
Finished processing
Exited main()
Stepping through the code, I was able to intercept a single call to node_modules/@opensearch-project/opensearch/lib/Helpers.js:704:93, where:
1. client.bulk() is called,
2. which calls bulkApi() in \opensearch\api\api\bulk.js and returns successfully,
3. await finish() is called in \opensearch\lib\Helpers.js:559,
4. inside \opensearch\lib\Transport.js a call to prepareRequest() is made, ending with return transportReturn,
5. this ends up in request():177 calling return p.then(onFulfilled, onRejected) with p being null at that time.
That caused my callback in the AWS Transport class (the one responsible for signing requests) to call back into tryBulk() in Helpers.js with the second parameter undefined, resulting in the Cannot destructure property 'body' error.
What is the expected behavior?
The Transport.js implementation of request should obviously not end up with a null promise p when a callback is passed to the request() call. I logged a bug in the opensearch-js repository.
Workaround
At least for me, this looks to be a problem only when using a custom AWS signed-requests connector implementation. If your case is similar, a quick workaround involves modifying that implementation's Transport class. Here is a quick and dirty hotfix specific to aws-elasticsearch-connector.
You need to modify AmazonTransport.js from
class AmazonTransport extends Transport {
  request (params, options = {}, callback = undefined) {
    ...
    // Callback support
    awaitAwsCredentials(awsConfig)
      .then(() => super.request(params, options, callback))
      .catch(callback)
  }
to
    // Callback support
    // Removed the .then() chain due to a bug: https://github.com/opensearch-project/opensearch-js/issues/185
    // .then() was calling then(onFulfilled, onRejected) on transportReturn, resulting in a null value exception.
    awaitAwsCredentials(awsConfig).then();
    try {
      super.request(params, options, callback);
    } catch (err) {
      callback(err, { body: null });
    }
I'm trying to create a new Cloud Run service from Firebase Functions using the googleapis client library. The following code:
const auth = new google.auth.GoogleAuth({
  projectId,
  scopes: ['https://www.googleapis.com/auth/cloud-platform']
});
const authClient = await auth.getClient();
const result = await google.run({
  version: 'v1',
  auth: authClient
}).namespaces.services.create({
  parent: `namespaces/${projectId}`,
  requestBody: {
    metadata: {
      name: 'asdf'
    },
    spec: {
      template: {
        spec: {
          containers: [
            {
              image: 'gcr.io/graph-4d1ec/graph@sha256:80c764961657d7e2fe548b3886c4662c55c9b5ac881aad5a74cce2d1f97895b8',
              env: [
                { name: 'URL', value: url }
              ]
            }
          ]
        }
      },
      traffic: [{ percent: 100, latestRevision: true }]
    }
  }
}, {})
produces an error:
Error: The request has errors
at Gaxios._request (/srv/node_modules/gaxios/build/src/gaxios.js:85:23)
at <anonymous>
at process._tickDomainCallback (internal/process/next_tick.js:229:7)
No further information is provided as to what is wrong with this request.
What am I doing wrong?
Most notably, the API client library you're using points by default to run.googleapis.com. However, when using namespaces.services.create you need a regional API endpoint, such as us-central1-run.googleapis.com. I'm not familiar with Node.js, but you need to change the API endpoint from the default to this value.
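Something along these lines might work, assuming the generated Node.js client honors a rootUrl override (an assumption on my part, untested):

const run = google.run({
  version: 'v1',
  auth: authClient,
  // Point at the regional endpoint instead of the default run.googleapis.com.
  rootUrl: 'https://us-central1-run.googleapis.com',
});
const result = await run.namespaces.services.create({
  // ...same request body as in the question...
});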
You are in super luck: I published a blog post just minutes ago explaining how gcloud run deploy works under the covers, with details on the API calls, how updates are made, etc.: https://ahmet.im/blog/gcloud-run-deploy/ It has sample Go code linked at the end that you can study. Note that "updating" Cloud Run services has several other intricacies to understand, so make sure to check out the blog post.
Furthermore, to debug the issue you are having, I'm assuming (again, I know nothing about Node.js) that you might find more info in the result object, which may store an error value, an HTTP response code, or a response body.
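For example (again a sketch, since I don't know the library well): googleapis throws Gaxios errors, which usually carry the HTTP response body with the actual validation messages, so catching and dumping that body should reveal which part of the request is wrong:

try {
  const result = await google
    .run({ version: 'v1', auth: authClient })
    .namespaces.services.create(/* ...request as in the question... */);
  console.log(result.data);
} catch (err) {
  // The response body usually says exactly which field was rejected.
  console.error(err.code, err.message);
  if (err.response) {
    console.error(JSON.stringify(err.response.data, null, 2));
  }
}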
I keep getting "UnhandledPromiseRejectionWarning: ConfigError: Missing region in config" when trying to make requests to APIs I have set up in Node.js.
I'm new to DynamoDB, and after setting up most of my boilerplate code I'm using Postman to test my routes. However, I keep getting the same error each time I make a POST request. I've checked some solutions on existing threads, namely Configuring region in Node.js AWS SDK, but cannot get it to work.
I am currently developing the app locally and have checked the database, where the items are being added.
My setup is as follows:
// user_controller.js
const uuid = require('uuid');
const sanitizer = require('validator');
const bcrypt = require('bcryptjs-then');
const AWS = require('aws-sdk');
const config = require('../config/config');
const { signToken, userByEmail, userById } = require('../Helpers/Users');
const isDev = true
Then in my code block I have the following:
// user_controller.js
(...)
if (isDev) {
  AWS.config.update(config.aws_local_config);
} else {
  AWS.config.update(config.aws_remote_config);
}

const DB = new AWS.DynamoDB.DocumentClient();

const params = {
  TableName: config.aws_table_name,
  Item: {
    userId: await uuid.v1(),
    firstName: sanitizer.trim(firstName),
    lastName: sanitizer.trim(lastName),
    email: sanitizer.normalizeEmail(sanitizer.trim(email)),
    password: await bcrypt.hash(password, 8),
    level: 'standard',
    createdAt: new Date().getTime(),
    updatedAt: new Date().getTime(),
  },
}
return userByEmail(params.Item.email) // Does the email already exist?
  .then(user => { if (user) throw new Error('User with that email exists') })
  .then(() => DB.put(params).promise()) // Add the data to the DB
  .then(() => userById(params.Item.id)) // Get user data from DB
  .then(user => (err, data) => {
    console.log("AFTER USER CREATED")
    if (err) {
      res.send({
        success: false,
        message: 'Error: Server error'
      });
    } else {
      console.log('data', data);
      res.send({
        statusCode: 201,
        message: 'Success - you are now registered',
        data: { token: signToken(params.Item.id), ...user },
      });
    }
  })
(...)
Finally I am importing the config from separate file:
// config.js
module.exports = {
  aws_table_name: 'usersTable',
  aws_local_config: {
    region: 'local',
    endpoint: 'http://localhost:8000'
  },
  aws_remote_config: {}
}
I have already configured the aws-sdk:
AWS Access Key ID [****************foo]:
AWS Secret Access Key [****************bar]:
Default region name [local]:
Default output format [json]:
Here is the output I keep getting:
(node:4568) UnhandledPromiseRejectionWarning: ConfigError: Missing region in config
at Request.VALIDATE_REGION (/Users/BANGBIZ/Programming/techstars/capexmove/SmartLegalContract/node_modules/aws-sdk/lib/event_listeners.js:92:45)
at Request.callListeners (/Users/BANGBIZ/Programming/techstars/capexmove/SmartLegalContract/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
at callNextListener (/Users/BANGBIZ/Programming/techstars/capexmove/SmartLegalContract/node_modules/aws-sdk/lib/sequential_executor.js:96:12)
at /Users/BANGBIZ/Programming/techstars/capexmove/SmartLegalContract/node_modules/aws-sdk/lib/event_listeners.js:86:9
at finish (/Users/BANGBIZ/Programming/techstars/capexmove/SmartLegalContract/node_modules/aws-sdk/lib/config.js:350:7)
at /Users/BANGBIZ/Programming/techstars/capexmove/SmartLegalContract/node_modules/aws-sdk/lib/config.js:368:9
at SharedIniFileCredentials.get (/Users/BANGBIZ/Programming/techstars/capexmove/SmartLegalContract/node_modules/aws-sdk/lib/credentials.js:127:7)
at getAsyncCredentials (/Users/BANGBIZ/Programming/techstars/capexmove/SmartLegalContract/node_modules/aws-sdk/lib/config.js:362:24)
at Config.getCredentials (/Users/BANGBIZ/Programming/techstars/capexmove/SmartLegalContract/node_modules/aws-sdk/lib/config.js:382:9)
at Request.VALIDATE_CREDENTIALS (/Users/BANGBIZ/Programming/techstars/capexmove/SmartLegalContract/node_modules/aws-sdk/lib/event_listeners.js:81:26)
(node:4568) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 4)
(node:4568) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Like I said, I've tried a lot of variations on this, but to no avail. Would love some help. Thanks.
I don't know if this helps, but I used none instead of local for the region and it seemed to work for me:
AWS.config.update({ region: 'none' })
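For a local setup, the SDK just needs some non-empty region string alongside the local endpoint. A sketch of a full local config, assuming DynamoDB Local's default port of 8000:

// Any non-empty region string satisfies the SDK's region check
// when the endpoint points at DynamoDB Local.
AWS.config.update({
  region: 'none',
  endpoint: 'http://localhost:8000',
});
const DB = new AWS.DynamoDB.DocumentClient();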
I'm new to Node.js and MarkLogic, and I'm following a tutorial for a simple app. I have set up and configured my MarkLogic login credentials.
When I run the sample code below with node sample.js, the output is: write document list cannot process response with 404 status.
I wonder why I'm encountering this error. Here is the code from the tutorial:
my-connection.js
module.exports = {
  connInfo: {
    host: '127.0.0.1',
    port: 8001,
    user: 'user',
    password: 'password'
  }
};
sample.js
const marklogic = require('marklogic');
const my = require('./my-connection.js');
const db = marklogic.createDatabaseClient(my.connInfo);
const documents = [
  {
    uri: '/gs/aardvark.json',
    content: {
      name: 'aardvark',
      kind: 'mammal',
      desc: 'The aardvark is a medium-sized burrowing, nocturnal mammal.'
    }
  },
  {
    uri: '/gs/bluebird.json',
    content: {
      name: 'bluebird',
      kind: 'bird',
      desc: 'The bluebird is a medium-sized, mostly insectivorous bird.'
    }
  },
  {
    uri: '/gs/cobra.json',
    content: {
      name: 'cobra',
      kind: 'mammal',
      desc: 'The cobra is a venomous, hooded snake of the family Elapidae.'
    }
  },
];
db.documents.write(documents).result(
  function(response) {
    console.log('Loaded the following documents:');
    response.documents.forEach( function(document) {
      console.log(' ' + document.uri);
    });
  },
  function(error) {
    console.log('error here');
    console.log(JSON.stringify(error, null, 2));
  }
);
I hope someone can tell me what is wrong with the code. Thank you!
The MarkLogic Node.js Client library is meant to run against a so-called MarkLogic REST API instance. There is typically one running at port 8000, but you can also deploy others at different ports by issuing a POST call to :8002/v1/rest-apis, as described here:
http://docs.marklogic.com/REST/POST/v1/rest-apis
Port 8001, however, is reserved for the MarkLogic Admin UI, which doesn't understand the REST calls that the Node.js Client library is trying to invoke, hence the 404 (Not Found).
HTH!
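In other words, the fix is likely just the port in my-connection.js, assuming a default REST API instance is running at 8000:

// my-connection.js
module.exports = {
  connInfo: {
    host: '127.0.0.1',
    port: 8000, // REST API instance, not the Admin UI on 8001
    user: 'user',
    password: 'password'
  }
};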
I'm new to the Sequelize library. From my understanding, id is created automatically by Sequelize (and that's what I see in the database). However, when I go to create an object, it throws this error:
{ [SequelizeUniqueConstraintError: Validation error]
  name: 'SequelizeUniqueConstraintError',
  message: 'Validation error',
  errors:
   [ { message: 'id must be unique',
       type: 'unique violation',
       path: 'id',
       value: '1' } ],
  fields: { id: '1' } }
The offending code:
db.Account.create({
  email: req.body.email,
  password: req.body.password,
  allowEmail: req.body.allowEmail,
  provider: 'local',
  role: 'user'
})
Notice that id is not specified anywhere, nor is it specified in my model definition. Also, the query it generates runs fine if I run it in the Postgres admin:
INSERT INTO "Accounts" ("id","email","role","verifyCode","provider","cheaterScore","isBanned","allowEmail","updatedAt","createdAt") VALUES (DEFAULT,'cat69232@gmail.com','user','','local',0,false,false,'2016-01-27 04:31:54.350 +00:00','2016-01-27 04:31:54.350 +00:00') RETURNING *;
Any ideas as to what I could be missing here?
Edit:
Postgres version: 9.5
The stack trace starts here:
/node_modules/sequelize/lib/dialects/postgres/query.js:326
Postgres has a habit of not resetting the next number in the sequence (autoincrement field) after bulk inserts. So if you're doing any pre-filling of the data in an init routine or from a SQL dump file, that's probably your issue.
Check out this post: https://dba.stackexchange.com/questions/65662/postgres-how-to-insert-row-with-autoincrement-id
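If that is what happened, one way to resync the sequence is a one-off raw query; here is a sketch assuming the Accounts table from the error above, run through Sequelize's raw query interface:

// Resync the id sequence to the current max id after bulk inserts.
// pg_get_serial_sequence looks up the sequence backing "Accounts"."id".
await sequelize.query(
  `SELECT setval(
     pg_get_serial_sequence('"Accounts"', 'id'),
     (SELECT COALESCE(MAX("id"), 1) FROM "Accounts")
   )`
);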