Parse Node errors to better group them in Sentry - node.js

I'm having an error in Sentry that is not properly grouped. The error thrown by Node is:
in-app exception
type: Error
value: EPERM: operation not permitted, stat 'C:\Users\administrateur.CLIMA\Documents\Fichiers Outlook'
Which is basically an fs.stat that is failing to read a path. My issue is that the path ('C:\Users\administrateur.CLIMA\Documents\Fichiers Outlook' in this example) is custom to each application/user we have, so I don't really care about this specific value...
What I'd like is to split:
value: EPERM: operation not permitted, stat 'C:\Users\administrateur.CLIMA\Documents\Fichiers Outlook'
into at least two blocks:
value: EPERM: operation not permitted, stat
'C:\Users\administrateur.CLIMA\Documents\Fichiers Outlook'
so I can then group all EPERM errors together in Sentry instead of having a different error for each path.
But I have no idea how I could do that. Any help and suggestions are welcome.
Thanks!
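One possible approach, sketched here under the assumption that the app uses the @sentry/node SDK: its beforeSend hook lets you set a custom fingerprint so every EPERM stat error lands in a single issue regardless of the path. The regular expression below is an assumption about the message format:
const Sentry = require('@sentry/node');

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  beforeSend(event, hint) {
    const err = hint && hint.originalException;
    // Assumed message shape: "EPERM: operation not permitted, stat '<path>'".
    // Group on the stable prefix and ignore the per-user path.
    if (err && err.message && /^EPERM: operation not permitted, stat /.test(err.message)) {
      event.fingerprint = ['eperm-stat']; // one Sentry issue for every path variant
    }
    return event;
  },
});
Alternatively, Sentry's server-side fingerprint rules (under the project's Issue Grouping settings) can match on the error type and message without any code change.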

Related

Check Sum Errors on Checkout

On a virtual host server, I have 1 repository that I can no longer checkout from a remote computer. Checkout works fine if I'm checking out on the server itself. All other repositories on the same virtual host server work without a problem, though none of them have as many files as this one.
When I check out on a remote computer, the checkout gets through a certain number of files, then starts displaying checksum errors, usually between 2 and 6 at a time. If I delete the checked-out folder and try again with a new folder, it stops after about the same number of files, and the set of files with checksum errors is different from the previous checkout. Checking out on a different remote computer gives the same random results.
First try:
Error: Checksum mismatch for
Error: 'C:\Users\jkorc\Documents\Projects\VHost\RoyalProvincial\Genealogy\Settle\settle.shtml':
Error:
Error: expected: b450dbef2a3ceb9542a4e22b4b3e50fe
Error: actual: 9454a4eb5afdbc215bffcc619f537fa3
Error: Additional errors:
Error: Checksum mismatch for
Error: 'C:\Users\jkorc\Documents\Projects\VHost\RoyalProvincial\Genealogy\Settle\lndnjv1.shtml':
Error:
Error: expected: e187d974743d0129a6c72413f205458c
Error: actual: 73c0b6aba9a9a42409e2cf7e6d043049
Error: Additional errors:
Error: Checksum mismatch for
Error: 'C:\Users\jkorc\Documents\Projects\VHost\RoyalProvincial\Genealogy\Settle\lndfraser.shtml':
Error:
Error: expected: 151444dc294357ba42a640333c94b6f7
Error: actual: 900ffa360acd460773acbe1759578533
Error: Additional errors:
Error: Checksum mismatch for
Error: 'C:\Users\jkorc\Documents\Projects\VHost\RoyalProvincial\Genealogy\Settle\lndretn1.shtml':
Error:
Error: expected: 6b8791dc78b0af936d6d7e70e11b69ee
Error: actual: dd7af732663345c6401861c89331adea
Error: Additional errors:
Error: Checksum mismatch for
Error: 'C:\Users\jkorc\Documents\Projects\VHost\RoyalProvincial\Genealogy\Settle\lndrfa2.shtml':
Error:
Error: expected: 7451fedd09e1f99adf4c4af2668c4942
Error: actual: 548f947ce54c697957daa4efa4192786
Error: Additional errors:
Error: Checksum mismatch for
Error: 'C:\Users\jkorc\Documents\Projects\VHost\RoyalProvincial\Genealogy\Settle\lndpwar3.shtml':
Error:
Error: expected: f7fba4f2a3a468df6c4948eaa034e119
Error: actual: 0f3c2da13e55a7c0271743e7d145cfca
Error: Additional errors:
Error: Checksum mismatch for
Error: 'C:\Users\jkorc\Documents\Projects\VHost\RoyalProvincial\Genealogy\Settle\lndrhe1.shtml':
Error:
Error: expected: 32cc673b06be5806bc6bc66000d5ec75
Error: actual: 5356175235c395577793e9fa9f0b9bb7
Second try:
Error: Checksum mismatch for
Error: 'C:\Users\jkorc\Documents\Projects\VHost\RoyalProvincial\Genealogy\Settle\Petition_Teder_M_1795.shtml':
Error:
Error: expected: c29978209a203cd254c641bd931739e5
Error: actual: e4042c36b5ff42a01eb890044d64b131
Error: Additional errors:
Error: Checksum mismatch for
Error: 'C:\Users\jkorc\Documents\Projects\VHost\RoyalProvincial\Genealogy\Settle\lndbrng1.shtml':
Error:
Error: expected: 6daeec06c6290557ca6b0de0d551fc43
Error: actual: 598d6bad86117e4cf8f384ed523800aa
I created a new repository of the same files on the same server. Same problem, except I start getting checksum errors after downloading only about a dozen files instead of a few hundred. The server is on Subversion 1.13.0, Tortoise is 1.13.1.
Any ideas what kind of problem I should be looking for?
My assumption would be that a certain folder has a corrupt checksum (for whatever reason).
First of all, I'd check the integrity of the SVN repository, e.g. by running svnadmin verify on the server. Some instructions can be found here:
https://www.darklaunch.com/fix-svnadmin-checksum-mismatch-while-reading-representation.html
Quite a few people have experienced this issue. The easy workaround normally mentioned is to recreate the repo:
Copy the content to a temporary directory
Add the folder to SVN as a new one
As the new content will get new checksums, those should be OK for all folders afterwards
If you want to do some direct editing, here's a post that corrected the SVN files successfully (use at your own risk, and make backups!):
https://maymay.net/blog/2008/06/17/fix-subversion-checksum-mismatch-error-by-editing-svnentries-file/
I once saw strange behaviour when network packets were being truncated. That was caused by the MTU (Maximum Transmission Unit). SVN seems to struggle here as well, so check that setting in your network infrastructure. Here's a link to a similar issue with SVN involved:
https://serverfault.com/questions/392881/tortoise-svn-repo-browser-checkout-over-vpn
In summary, to isolate the issue I would take the following steps:
A) See if the repo's SVN metadata is the cause:
Log in to the server
Copy the repo as a plain filesystem tree (without the SVN data)
Create a new repo from the files
Connect an SVN client from another host; if all goes well, the old repo's SVN metadata was corrupt
B) Check if it's host/network specific:
Do the same as in A, but from another host (i.e. create the repo from there)
C) Check if the network MTU is OK:
Check up to which size packets are transferred correctly (I won't go into details, as this is very OS/infrastructure specific, but you'll find plenty of information; on Windows, for example, ping -f -l <size> <host> sends don't-fragment packets of a given size)
Note that hops, VPNs, multiple routers, and IPv4/IPv6 can all play into this
Having written all that: did you cross-check the behaviour with a second SVN client in the first place? That might be the first thing to do.

Node.js: Writing to system files with fs.writeFileSync

I am trying to write to a system file under /sys/kernel/config/usb_gadget with fs.writeFileSync. When writing "" as the contents, the file remains unchanged (original contents intact), and I get
Error: EBUSY: resource busy or locked, write
at Object.writeSync (fs.js:581:3)
at Object.writeFileSync (fs.js:1275:26)
at Socket.<anonymous> (/opt/sterling/ip-kvm-interface/app.js:249:6)
at Socket.emit (events.js:210:5)
at /opt/sterling/ip-kvm-interface/node_modules/socket.io/lib/socket.js:528:12
at processTicksAndRejections (internal/process/task_queues.js:75:11) {
errno: -16,
syscall: 'write',
code: 'EBUSY'
}
when writing any other contents. Permissions on the destination file are 777.
Is fs.writeFileSync incapable of writing to files under /sys, or am I missing something else?
Running fuser /sys/kernel/config/usb_gadget/kvm-gadget/UDC returns nothing (even while the Node process is running), and lsof | grep /sys/kernel/config/usb_gadget/kvm-gadget/UDC also returns nothing.
Am I going to have to spawn an echo process to get this to work? (Not preferred, but it crossed my mind; I'm also not sure how I would make that synchronous.)
https://github.com/nodejs/help/issues/2459
Are there undocumented limitations to fs.writeFileSync that I am unaware of?
Nothing specific to fs.writeFileSync(); you can get the same error with a plain C program.
/sys/kernel/config/usb_gadget is not a real file, it's a communication channel with the kernel's USB gadget driver. It's that driver that is returning the error.
(I could point you to the line of code if you're really interested. It's drivers/usb/gadget/configfs.c in the kernel source tree in any case.)
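A hedged sketch of a possible workaround, building on the echo idea from the question: echo "" > UDC writes a single newline, while fs.writeFileSync(path, '') issues a zero-length write that the driver may simply ignore, so writing '\n' from Node mirrors what echo does. The kvm-gadget path is the one from the question; whether this satisfies the configfs handler is an assumption, not something confirmed above:
const fs = require('fs');

const udc = '/sys/kernel/config/usb_gadget/kvm-gadget/UDC';

// Unbind the gadget the way `echo "" > UDC` does: write "\n", not a
// zero-length buffer (assumption: a zero-length write never reaches the
// driver's store handler).
fs.writeFileSync(udc, '\n');

// To (re)bind later, write a UDC name; /sys/class/udc lists the available
// controllers. Writing a name while the gadget is already bound is one way
// to get EBUSY.
const [udcName] = fs.readdirSync('/sys/class/udc');
fs.writeFileSync(udc, udcName + '\n');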

SQLITE_IOERR in Meteor

Retrying after error { [Error: SQLITE_IOERR: disk I/O error] errno: 10, code:
'SQLITE_IOERR' }
Retrying after error { [Error: SQLITE_IOERR: disk I/O error] errno: 10, code:
'SQLITE_IOERR' }
/home/kdibbs/.meteor/packages/meteor-tool/.1.4.2.1r0536n++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/lib/node_modules/meteor-promise/promise_server.js:190
throw error;
^
Error: SQLITE_IOERR: disk I/O error
at Error (native)
=> awaited here:
at Promise.await (/home/kdibbs/.meteor/packages/meteor-tool/.1.4.2.1r0536n++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/lib/node_modules/meteor-promise/promise_server.js:39:12)
at Db._execute (/tools/packaging/catalog/catalog-remote.js:355:8)
at /tools/packaging/catalog/catalog-remote.js:144:10
at Db._retry (/tools/packaging/catalog/catalog-remote.js:156:16)
at new Db (/tools/packaging/catalog/catalog-remote.js:143:8)
at RemoteCatalog.initialize (/tools/packaging/catalog/catalog-remote.js:694:15)
at /tools/cli/main.js:815:20
So I ran this Meteor program a few days ago, then I made a few more users and a group on my machine, none of which affected my normal user. And now I'm getting this error. Any clues?
Thanks.
Did you try searching for this? Looking at the SQLite reference here: https://www.sqlite.org/rescode.html#ioerr
It sounds like one of your disks is giving an error: something like too many files open, or perhaps it's getting full?
(10) SQLITE_IOERR
The SQLITE_IOERR result code says that the operation could not finish because the operating system reported an I/O error. A full disk drive will normally give an SQLITE_FULL error rather than an SQLITE_IOERR error.
There are many different extended result codes for I/O errors that identify the specific I/O operation that failed.
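To test the disk theory directly, here is a minimal sketch (not from the answer; the probe location is an assumption based on the paths in the stack trace) that writes and removes a small file on the filesystem holding ~/.meteor, where the failing SQLite database lives:
const fs = require('fs');
const os = require('os');
const path = require('path');

// Probe the filesystem that holds ~/.meteor (assumed from the stack trace).
const probe = path.join(os.homedir(), '.meteor', 'io-probe.tmp');

try {
  fs.writeFileSync(probe, Buffer.alloc(1024 * 1024)); // 1 MB test write
  fs.unlinkSync(probe);
  console.log('Disk write OK');
} catch (err) {
  // ENOSPC/EDQUOT point to space or quota problems; EIO points to a failing disk.
  console.error('Disk problem?', err.code, err.message);
}
Checking free space (df -h) and the user's open-file limit (ulimit -n) would cover the other two theories.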

How to release resource \AppData\Local\Temp\shelljs_60d55c70cdc922162f4b without removing antivirus or reinstalling node?

I'm running Ionic 2 to build a Windows app, but it is giving me the following error:
shell.js: internal error
Error: EBUSY: resource busy or locked, open 'C:\Users\edge\AppData\Local\Temp\shelljs_60d55c70cdc922162f4b'
at Error (native)
at Object.fs.openSync (fs.js:640:18)
at Object.fs.writeFileSync (fs.js:1333:33)
at execSync (C:\Users\edge\AppData\Roaming\npm\node_modules\ionic\node_modules\shelljs\src\exec.js:67:57)
at Object._exec (C:\Users\edge\AppData\Roaming\npm\node_modules\ionic\node_modules\shelljs\src\exec.js:179:12)
at Object.exec (C:\Users\edge\AppData\Roaming\npm\node_modules\ionic\node_modules\shelljs\src\common.js:168:23)
at Object.gatherGulpInfo (C:\Users\edge\AppData\Roaming\npm\node_modules\ionic\node_modules\ionic-app-lib\lib\info.js:201:24)
at Object.t (C:\Users\edge\AppData\Roaming\npm\node_modules\ionic\lib\utils\stats.js:148:15)
at Object.run (C:\Users\edge\AppData\Roaming\npm\node_modules\ionic\lib\cli.js:135:16)
at Object.<anonymous> (C:\Users\edge\AppData\Roaming\npm\node_modules\ionic\bin\ionic:13:10)
I saw the same error:
Ionic run android - Internal Error
But I don't want to remove my antivirus and reinstall Node.js.
When I restart my system and run the command immediately afterwards, it works (possibly some process that accesses this resource takes time to start).
A snapshot of the processes running on my system is as follows:
I faced the same issue and resolved it by uninstalling ByteFence anti-malware; it was actually using the same resource mentioned above. (I checked this on a Dell laptop.)
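If the EBUSY comes from code you control (here the locked write happens inside the Ionic CLI's bundled shelljs, so this only helps where you can wrap the failing call yourself), a short retry loop is a common way to outlast an antivirus scanner's brief lock. A hedged sketch; the temp-file path is just the one from the error message above:
const fs = require('fs');

// Retry an operation that may fail with EBUSY while a scanner holds the file.
async function withEbusyRetry(op, attempts = 5, delayMs = 500) {
  for (let i = 0; i < attempts; i++) {
    try {
      return op();
    } catch (err) {
      if (err.code !== 'EBUSY' || i === attempts - 1) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Hypothetical usage, reusing the path from the error above:
withEbusyRetry(() =>
  fs.writeFileSync('C:\\Users\\edge\\AppData\\Local\\Temp\\shelljs_60d55c70cdc922162f4b', 'data')
).catch(console.error);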

How to push data into MongoDB using SparkFun Phant?

I am new to Phant and I cannot find suitable documentation on using Phant with MongoDB. I have lots of data, so a memory overflow occurs, and I finally run into the following error:
HTTP output: { [Error: EMFILE, open 'phant_streams/4d16/83403f7611e5810d57f88174fbef/stream.csv']
errno: -24,
code: 'EMFILE',
path: 'phant_streams/4d16/83403f7611e5810d57f88174fbef/stream.csv' }
events.js:87
throw Error('Uncaught, unspecified "error" event.');
^
Error: Uncaught, unspecified "error" event.
at Error (native)
at Function.emit (events.js:87:13)
at Function.<anonymous> (/usr/lib/node_modules/phant/node_modules/phant-manager-http/index.js:237:12)
at PhantMeta.<anonymous> (/usr/lib/node_modules/phant/node_modules/phant-meta-nedb/lib/phant-meta-nedb.js:243:14)
at callback (/usr/lib/node_modules/phant/node_modules/phant-meta-nedb/node_modules/nedb/lib/executor.js:30:17)
at /usr/lib/node_modules/phant/node_modules/phant-meta-nedb/node_modules/nedb/lib/datastore.js:536:25
at /usr/lib/node_modules/phant/node_modules/phant-meta-nedb/node_modules/nedb/lib/persistence.js:201:12
at fs.js:1077:21
at FSReqWrap.oncomplete (fs.js:95:15)
Besides this, the following error sometimes occurs as well:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - process out of memory
That's why I want to use MongoDB, to prevent this error. I searched around and finally found the SparkFun library for MongoDB:
https://github.com/sparkfun/phant-stream-mongodb
I installed it, but nothing happened; data is still not being stored in Mongo.
So, how do I store Phant data in MongoDB?
I had the same problem, specifically trying to deploy my own Phant instance on Heroku (since I wanted to circumvent SparkFun's 50 MB limit). After some dabbling with versions of the mongodb and mongoose libraries, I successfully forked and modified their repository so that you can either run it locally or deploy it directly on Heroku (just make sure you provision a MongoLab add-on). Check out my fork here: https://github.com/davidlago/phant
Hope this helps!
