I'm currently trying to implement an Ethereum node connection in my TypeScript/Node project.
I'm connecting to the Infura node service, which means I need to sign my transactions locally.
Anyway, I'm signing the transaction using the npm package "ethereumjs-tx", and everything looks fine.
When I call web3's "sendRawTransaction", the response is a tx id, which should mean my transaction is already in the blockchain. Well... it isn't.
My sign transaction function is below.
private signTransactionLocally(amountInWei: number, to: string, privateKey: string = <PRIVATE_KEY>, wallet: string = <MY_WALLET>) {
    const pKeyBuffer = Buffer.from(privateKey, "hex");
    const txParams = {
        nonce: this.getNonce(true, wallet),
        //gas: this.getGasPrice(true),
        gasLimit: this.getGasLimit2(true),
        to: to,
        value: amountInWei,
        data: '0x000000000000000000000000000000000000000000000000000000000000000000000000',
        chainId: "0x1"
    };
    // console.log(JSON.stringify(txParams));
    const tx = new this.ethereumTx(txParams);
    tx.sign(pKeyBuffer);
    return tx.serialize().toString("hex");
}
Functions used in "signTransactionLocally":
private getGasLimit2(hex: boolean = false) {
    const latestGasLimit = this.web3.eth.getBlock("latest").gasLimit;
    return hex ? this.toHex(latestGasLimit) : latestGasLimit;
}

private getNonce(hex: boolean = false, wallet: string = "0x60a22659E0939a061a7C9288265357f5d26Cf98a") {
    return hex ? this.toHex(this.eth().getTransactionCount(wallet)) : this.eth().getTransactionCount(wallet);
}
Running my code looks like the following:
this.dumpInformations();
const signedTransaction = this.signTransactionLocally(this.toHex((this.getMaxAmountToSend(false, "0x60a22659E0939a061a7C9288265357f5d26Cf98a") / 3)), "0x38bc48f1d19fdf7c8094a4e40334250ce1c1dc66");
console.log(signedTransaction);
this.web3.eth.sendRawTransaction("0x" + signedTransaction, function(err: any, res: any) {
    if (err)
        console.log(err);
    else
        console.log("transaction Done=>" + res);
});
Since sendRawTransaction results in this console log:
[Node] transaction Done=>0xc1520ebfe0a225e6971e81953221c60ac1bfcd528e2cc17080b3f9b357003e34
everything should be all right.
Has anybody had the same problem?
I hope that someone can help me. Have a nice day!
After dealing with these issues countless times, I'm pretty sure you are sending a nonce that is too high.
In these cases, the node will still return a transaction hash, but your transaction will remain in the node's queue and will not enter the memory pool or be propagated to other nodes UNTIL you fill in the nonce gaps.
You could try the following:
Use getTransactionCount(address, 'pending') to include the transactions that are in the node's queue and memory pool. This method is unreliable, though, and won't cope with concurrent requests, as the node needs time to assess the correct count at any given moment.
Keep your own counter, without relying on the node (unless you detect some serious errors); a minimal sketch of this is shown after this list.
For more serious projects, persist your counter per address at the database level, with locks to handle concurrency, making sure you hand out the correct nonce to each request.
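For illustration, here is a rough sketch of keeping a local nonce counter, assuming the synchronous web3 0.x call style used in the question; the names localNonce and getNextNonce are made up for this example:
// Hypothetical local nonce tracker; assumes this process is the only sender
// for this wallet and that getTransactionCount can be called synchronously.
let localNonce = -1; // not yet seeded

function getNextNonce(web3: any, wallet: string): number {
    if (localNonce < 0) {
        // Seed once from the node, counting transactions still pending in its pool.
        localNonce = web3.eth.getTransactionCount(wallet, "pending");
    }
    const nonce = localNonce;
    localNonce += 1; // reserve this nonce for the transaction about to be signed
    return nonce;
}
You would then pass something like this.toHex(getNextNonce(this.web3, wallet)) wherever the code above calls this.getNonce(true, wallet).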
Cheers
If I publish 50K messages using Promise.all like below:
const pubsub = new PubSub({ projectId: PUBSUB_PROJECT_ID });
const topic = pubsub.topic(topicName, {
    batching: {
        maxMessages: 1000,
        maxMilliseconds: 100,
    },
});

const n = 50 * 1000;
const dataBufs: Buffer[] = [];
for (let i = 0; i < n; i++) {
    const data = `message payload ${i}`;
    const dataBuffer = Buffer.from(data);
    dataBufs.push(dataBuffer);
}
const tasks = dataBufs.map((d, idx) =>
    topic.publish(d).then((messageId) => {
        console.log(`[${new Date().toISOString()}] Message ${messageId} published. index: ${idx}`);
    })
);
// publish messages concurrently
await Promise.all(tasks);
// send response to front-end
res.json(data);
I will hit this issue: pubsub-emulator throws an error and the publisher throws "Retry total timeout exceeded before any response was received" when publishing 50k messages.
If I use a for loop with async/await, the issue is gone.
const n = 50 * 1000;
for (let i = 0; i < n; i++) {
    const data = `message payload ${i}`;
    const dataBuffer = Buffer.from(data);
    const messageId = await topic.publish(dataBuffer);
    console.log(`[${new Date().toISOString()}] Message ${messageId} published. index: ${i}`);
}
// some logic ...
// send response to front-end
res.json(data);
But, because of async/await, it blocks the execution of the subsequent logic until all messages have been published, and it takes a long time to publish 50k messages.
Any suggestions on how to publish a huge number of messages (about 50k) without blocking the execution of subsequent logic? Do I need to use child_process, or some queue like bull, to publish them in the background without blocking the request/response workflow of the API? In other words, I need to respond to the front end as soon as possible; the 50k messages should be a background task.
It seems there is an in-memory queue inside the @google-cloud/pubsub library. I am not sure if I should use another queue like bull as well.
The time it will take to publish large amounts of data depends on a lot of factors:
Message size. The larger the messages, the longer it takes to send them.
Network capacity (both of the connection between wherever the publisher is running and Google Cloud and, if relevant, of the virtual machine itself). This puts an upper bound on the amount of data that can be transmitted. It is not atypical to see smaller virtual machines with limits in the 40MB/s range. Note that if you are testing via Wifi, the limits could be even lower than this.
Number of threads and number of CPU cores. When having to run a lot of asynchronous callbacks, the ability to schedule them to run can be limited by the parallel capacity of the machine or runtime environment.
Typically, it is not good to try to send 50,000 publishes simultaneously from one instance of a publisher. It is likely that the above factors will cause the client to get overloaded and result in deadline exceeded errors. The best way to prevent this is to limit the number of messages that can be outstanding for publish at one time. Some of the libraries like Java support this natively. The Node.js library does not yet support this feature, but likely will in the future.
In the meantime, you'd want to keep a counter of the number of messages outstanding and limit it to whatever the client seems to be able to handle. Start with 1000 and work up or down from there based on the results. A semaphore would be a pretty standard way to achieve this behavior. In your case the code would look something like this:
var sem = require('semaphore')(1000);
var publishes = [];
const tasks = dataBufs.map((d, idx) =>
    sem.take(function() {
        publishes.push(topic.publish(d).then((messageId) => {
            console.log(`[${new Date().toISOString()}] Message ${messageId} published. index: ${idx}`);
            sem.leave();
        }));
    })
);
// Await the start of publishing all messages
await Promise.all(tasks);
// Await the actual publishes
await Promise.all(publishes);
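If you would rather not pull in the semaphore package, a similar cap on outstanding publishes can be approximated by sending the buffers in fixed-size chunks; this is only a sketch, and the chunk size of 1000 is an assumed starting point to tune against the timeouts you observe:
// Sketch: cap the number of outstanding publishes by publishing in chunks.
const chunkSize = 1000;
for (let start = 0; start < dataBufs.length; start += chunkSize) {
    const chunk = dataBufs.slice(start, start + chunkSize);
    // Wait for this whole chunk to settle before starting the next one.
    await Promise.all(
        chunk.map((d, i) =>
            topic.publish(d).then((messageId) => {
                console.log(`Message ${messageId} published. index: ${start + i}`);
            })
        )
    );
}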
I'm having a memory leak issue in a Node application. The application is subscribed to a topic in Redis and, on receiving a message, pops an item from a list using brpop. There are a number of instances of this application running in production, so one instance might be blocking on a message from the Redis list at any given time. Here is the code snippet that consumes a message from Redis:
private doWork(): void {
    this.storage.subscribe("newRoom", (message: [any, any]) => {
        const [msg] = message;
        if (msg === "room") {
            return new Promise(async (resolve, reject) => {
                process.nextTick(async () => {
                    const roomIdData = await this.storage.brpop("newRoomList"); // a promisified version of brpop with a timeout of 5s
                    if (roomIdData) {
                        const roomId = roomIdData[1];
                        this.createRoom(roomId);
                    }
                });
                resolve();
            });
        }
    });
}
I've tried debugging the memory leak using the Chrome debugger, and I've observed too many closure objects getting created. I suspect this code is the cause, since I can see the Redis client object's name in the closure objects, but I'm not able to figure out how to fix it. I added process.nextTick, but it didn't help. I'm using the node-redis client for connecting to Redis. I attached an object retainer map screenshot from the Chrome debugger tool.
P.S. blk is the Redis client object used exclusively for blocking commands, i.e. brpop.
Edit: I replaced brpop with rpop and we're seeing a significant drop in the memory growth rate, but now the distribution of messages between the workers has become skewed.
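For comparison, a common alternative shape for this consumer is a single long-running loop per worker that keeps calling the blocking pop, rather than issuing a new brpop from inside every pub/sub callback; this is only a sketch, reusing the promisified this.storage.brpop from the snippet above:
// Sketch: one dedicated consumer loop per worker instead of a brpop per "newRoom" event.
private async consumeRooms(): Promise<void> {
    while (true) {
        // blocks for up to 5s and resolves with nothing on timeout
        const roomIdData = await this.storage.brpop("newRoomList");
        if (roomIdData) {
            this.createRoom(roomIdData[1]);
        }
    }
}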
I have been asked to use blockchain for a web app I am building, and I had not heard about it before. After searching for some information about it, I now understand what it is about. So, basically, it encrypts some data in blocks and, in this way, the data is safe.
For instance, here is some code I took from the internet:
const SHA256 = require('crypto-js/sha256');

class Block {
    constructor(timestamp, data) {
        this.index = 0;
        this.timestamp = timestamp;
        this.data = data;
        this.previousHash = "0";
        this.hash = this.calculateHash();
        this.nonce = 0;
    }

    calculateHash() {
        return SHA256(this.index + this.previousHash + this.timestamp + this.data + this.nonce).toString();
    }

    mineBlock(difficulty) {
    }
}

class Blockchain {
    constructor() {
        this.chain = [this.createGenesis()];
    }

    createGenesis() {
        return new Block(0, "01/01/2017", "Genesis block", "0");
    }

    latestBlock() {
        return this.chain[this.chain.length - 1];
    }

    addBlock(newBlock) {
        newBlock.previousHash = this.latestBlock().hash;
        newBlock.hash = newBlock.calculateHash();
        this.chain.push(newBlock);
    }

    checkValid() {
        for (let i = 1; i < this.chain.length; i++) {
            const currentBlock = this.chain[i];
            const previousBlock = this.chain[i - 1];
            if (currentBlock.hash !== currentBlock.calculateHash()) {
                return false;
            }
            if (currentBlock.previousHash !== previousBlock.hash) {
                return false;
            }
        }
        return true;
    }
}

let jsChain = new Blockchain();
jsChain.addBlock(new Block("12/25/2017", {amount: 5}));
jsChain.addBlock(new Block("12/26/2017", {amount: 10}));
And this is the result:
As you can see, we have encrypted "Genesis block" in blocks (if I am not wrong).
Ok so, what if I want to decrypt the data in order to get "Genesis block" back? Will this be possible?
I am new at this, so I find it a bit confusing... I understand what it is about, but I do not know how I would implement it in my web app. My web app basically gets some information from the database, shows that information to the final user, and the final user sends an email to a customer who will have to pay through a link.
Blockchains are not about encrypting data to keep it "safe"; they are about creating an auditable ledger of changes to some state that needs to be agreed by multiple parties. In the case of bitcoin, they represent the distribution of currency across different wallets, and record transactions which change that distribution.
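To make that concrete with the code from the question: the data field of each block is stored as-is, and only the hash is derived from it; a hash cannot be "decrypted", only recomputed and compared. A quick check against the jsChain instance created above:
// The block data is a plain, readable value - there is nothing to decrypt.
console.log(jsChain.chain[1].data); // { amount: 5 }
// The hash is a one-way SHA-256 digest: it lets you verify that the data was not
// tampered with, but it cannot be reversed to recover the input.
console.log(jsChain.chain[1].hash);
console.log(jsChain.checkValid()); // true, until a block is modified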
Blockchains are a very specialized technology, and while there are a few situations where they are useful, there are also a lot of people searching for problems to fit the solution. You're better off analysing your actual problems first, and then looking around for technologies that solve those problems.
In this case, your number one priority should be general security - if someone manages to run an UPDATE on your internal database, something has already gone very wrong. As part of "defence in depth", you might also want an audit trail of changes that have been made. That might mean picking a database technology that forms an append-only ledger, but unlike a traditional blockchain, you probably don't need that to be based on distributed consensus. And you might find that actually having a separately secured audit log that you can cross-check if anything suspicious happens is enough for the scenarios you're expecting.
Finally, if you do decide that a blockchain-based ledger solves a real problem you have, look for an existing implementation you can take advantage of, for the same reason you wouldn't try to write MySQL from scratch.
I have an application that checks for new entries in DB2 every 15 seconds on the iSeries, using IBM's idb-connector. I have async functions that return the result of the query to socket.io, which emits an event with the data included to the front end. I've narrowed the memory leak down to the async functions. I've read multiple articles on common memory leak causes and how to diagnose them:
MDN: memory management
Rising Stack: garbage collection explained
Marmelab: Finding And Fixing Node.js Memory Leaks: A Practical Guide
But I'm still not seeing where the problem is. Also, I'm unable to get permission to install node-gyp on the system, which means most memory management tools are off limits, as memwatch, heapdump and the like need node-gyp to install. Here's an example of the functions' basic structure.
const { dbconn, dbstmt } = require('idb-connector'); // require idb-connector

async function queryDB() {
    const sSql = `SELECT * FROM LIBNAME.TABLE LIMIT 500`;
    // create new promise
    let promise = new Promise(function(resolve, reject) {
        // create new connection
        const connection = new dbconn();
        connection.conn("*LOCAL");
        const statement = new dbstmt(connection);
        statement.exec(sSql, (rows, err) => {
            if (err) {
                throw err;
            }
            let ticks = rows;
            statement.close();
            connection.disconn();
            connection.close();
            resolve(ticks.length); // resolve promise with varying data
        });
    });
    let result = await promise; // await promise
    return result;
};
async function getNewData() {
    const data = await queryDB(); // get new data
    io.emit('newData', data); // push to front end
    setTimeout(getNewData, 2000); // check again in 2 seconds
};
Any ideas on where the leak is? Am I using async/await incorrectly? Or am I creating/destroying DB connections improperly? Any help in figuring out why this code is leaky would be much appreciated!
Edit: I forgot to mention that I have limited control over the backend processes, as they are handled by another team. I'm only retrieving the data they populate the DB with and adding it to a web page.
Edit 2: I think I've narrowed it down to the DB connections not being cleaned up properly. But, as far as I can tell, I've followed the instructions suggested in their GitHub repo.
I don't know the answer to your specific question, but instead of issuing a query every 15 seconds, I might go about this in a different way, the reason being that I don't generally like fishing expeditions when the environment can tell me an event occurred.
So, in that vein, you might want to try a database trigger that loads the key of the row into a data queue on add (or even on change or delete, if necessary). Then you can just make an async call that waits for a record on the data queue. This is more real-time, and the event handler is only called when a record shows up. The handler can get the specific record from the database, since you know its key. Data queues are much faster than database I/O, and place little overhead on the trigger.
I see a couple of potential advantages with this method:
You aren't issuing dozens of queries that may or may not return data.
The event would fire the instant a record is added to the table, rather than 15 seconds later.
You don't have to code for the possibility of one or more new records; there will always be exactly one, the one referenced by the data queue entry.
Yes, you have to close the connection.
Don't create const data; you don't need the promise, because statement.exec is async by default and handles it via return result;.
Keep the setTimeout(getNewData, 2000); // check again in 2 seconds line outside getNewData, otherwise it becomes a recursive infinite loop.
Sample code:
const { dbconn, dbstmt } = require('idb-connector');

const sql = 'SELECT * FROM QIWS.QCUSTCDT';
const connection = new dbconn(); // Create a connection object.
connection.conn('*LOCAL'); // Connect to a database.
const statement = new dbstmt(connection); // Create a statement object of the connection.

statement.exec(sql, (result, error) => {
    if (error) {
        throw error;
    }
    console.log(`Result Set: ${JSON.stringify(result)}`);
    statement.close(); // Clean up the statement object.
    connection.disconn(); // Disconnect from the database.
    connection.close(); // Clean up the connection object.
    return result;
});
async function getNewData() {
    const data = await queryDB(); // get new data
    io.emit('newData', data); // push to front end
    setTimeout(getNewData, 2000); // check again in 2 seconds
};

change to

async function getNewData() {
    const data = await queryDB(); // get new data
    io.emit('newData', data); // push to front end
};
setTimeout(getNewData, 2000); // check again in 2 seconds
The first thing to notice is a possibly open database connection in case of an error:
if (err) {
    throw err;
}
Also, on success, connection.disconn() and connection.close() return boolean values that tell whether the operation was successful (according to the documentation).
Piling up connection objects in the 3rd-party library is always a possible scenario.
I would check those.
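As an illustration of that first point, one way to make sure the statement and connection are released even when the query fails is to clean up on both paths and reject the promise instead of throwing from inside the callback; a sketch based on the queryDB function from the question:
// Sketch: release the statement and connection on both the error and success paths.
statement.exec(sSql, (rows, err) => {
    if (err) {
        statement.close();
        connection.disconn();
        connection.close();
        reject(err); // let the caller of queryDB handle the failure
        return;
    }
    statement.close();
    connection.disconn();
    connection.close();
    resolve(rows.length);
});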
This was confirmed to be a memory leak in the idb-connector library that I was using. Link to the GitHub issue Here. Basically, there was a C++ array that never had its memory deallocated. A new version was released and the commit can be viewed Here.
I'm a Node noob and trying to understand how one would implement auto discovery in a Node.js application. I'm going to use the cluster module and want each worker process to be kept up to date with (and persistently connected to) the ElastiCache nodes.
Since there is no concept of shared memory (like PHP APC), would you have to have code that runs in each worker, wakes up every X seconds, and somehow updates the list of IPs and reconnects the memcached client?
How do people solve this today? Example code would be much appreciated.
Note that at this time, Auto Discovery is only available for cache clusters running the memcached engine.
For Cache Engine Version 1.4.14 or Higher you need to create a TCP/IP socket to the Cache Cluster Configuration Endpoint (or any Cache Node Endpoint) and send this command:
config get cluster
With Node.js you can use the net.Socket class to do that.
The reply consists of two lines:
The version number of the configuration information. Each time a node is added or removed from the cache cluster, the version number increases by one.
A list of cache nodes. Each node in the list is represented by a hostname|ip-address|port group, and each node is delimited by a space.
A carriage return and a linefeed character (CR + LF) appears at the end of each line.
Here you can find a more thorough description of how to add Auto Discovery to your client library.
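For illustration, a minimal sketch of querying the configuration endpoint with net.Socket could look like the following; the hostname is a placeholder and the reply parsing is a rough assumption based on the format described above:
const net = require('net');
// Sketch: ask the configuration endpoint for the current node list.
const socket = net.connect(11211, 'mycluster.cfg.use1.cache.amazonaws.com', () => {
    socket.write('config get cluster\r\n');
});
let reply = '';
socket.on('data', (chunk) => {
    reply += chunk.toString();
    if (reply.indexOf('END') !== -1) {
        socket.end();
        // The payload has a version line followed by a line of
        // "hostname|ip-address|port" entries separated by spaces.
        const nodeLine = reply.split('\r\n').filter((line) => line.indexOf('|') !== -1)[0] || '';
        const nodes = nodeLine.split(' ').map((entry) => {
            const parts = entry.split('|');
            return (parts[1] || parts[0]) + ':' + parts[2];
        });
        console.log('Cache nodes:', nodes);
    }
});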
Using the cluster module you need to store the same information in each process (i.e. child) and I would use "setInterval" per child to periodically check (e.g. every 60 seconds) the list of nodes and re-connect only if the list has changed (this should not happen very often).
You can optionally update the list on the master only and use "worker.send" to update the workers. This could keep all the processes running on a single server more in sync, but it would not help in a multi-server architecture, so it is very important to use consistent hashing in order to be able to change the list of nodes and lose the "minimum" number of keys stored in the memcached cluster.
I would use a global variable to store this kind of configuration.
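If you go the master-only route mentioned above, a rough sketch of pushing updates to workers with worker.send might look like this (the message shape { type, nodes } is just an assumption for the example):
const cluster = require('cluster');
if (cluster.isMaster) {
    // Master: fork workers and push the discovered node list to each of them.
    const workers = [cluster.fork(), cluster.fork()];
    function broadcastNodes(nodes) {
        workers.forEach(function(w) { w.send({ type: 'cacheNodes', nodes: nodes }); });
    }
    // call broadcastNodes(endpoints) from the discovery code shown below
} else {
    // Worker: rebuild the memcached client whenever an updated list arrives.
    process.on('message', function(msg) {
        if (msg && msg.type === 'cacheNodes') {
            // e.g. new Memcached(msg.nodes)
        }
    });
}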
Thinking twice, you can use the AWS SDK for Node.js to get the list of ElastiCache nodes (and that works for the Redis engine as well).
In that case the code would be something like:
var util = require('util'),
    AWS = require('aws-sdk'),
    Memcached = require('memcached');

global.AWS_REGION = 'eu-west-1'; // Just as a sample I'm using the EU West region
global.CACHE_CLUSTER_ID = 'test';

global.CACHE_ENDPOINTS = [];
global.MEMCACHED = null;

function init() {
    AWS.config.update({
        region: global.AWS_REGION
    });

    var elasticache = new AWS.ElastiCache();

    function getElastiCacheEndpoints() {

        function sameEndpoints(list1, list2) {
            if (list1.length != list2.length)
                return false;
            return list1.every(
                function(e) {
                    return list2.indexOf(e) > -1;
                });
        }

        function logElastiCacheEndpoints() {
            global.CACHE_ENDPOINTS.forEach(
                function(e) {
                    util.log('Memcached Endpoint: ' + e);
                });
        }

        elasticache.describeCacheClusters({
                CacheClusterId: global.CACHE_CLUSTER_ID,
                ShowCacheNodeInfo: true
            },
            function(err, data) {
                if (!err) {
                    util.log('Describe Cache Cluster Id:' + global.CACHE_CLUSTER_ID);
                    if (data.CacheClusters[0].CacheClusterStatus == 'available') {
                        var endpoints = [];
                        data.CacheClusters[0].CacheNodes.forEach(
                            function(n) {
                                var e = n.Endpoint.Address + ':' + n.Endpoint.Port;
                                endpoints.push(e);
                            });
                        if (!sameEndpoints(endpoints, global.CACHE_ENDPOINTS)) {
                            util.log('Memcached Endpoints changed');
                            global.CACHE_ENDPOINTS = endpoints;
                            if (global.MEMCACHED)
                                global.MEMCACHED.end();
                            global.MEMCACHED = new Memcached(global.CACHE_ENDPOINTS);
                            process.nextTick(logElastiCacheEndpoints);
                            setInterval(getElastiCacheEndpoints, 60000); // From now on, update every 60 seconds
                        }
                    } else {
                        setTimeout(getElastiCacheEndpoints, 10000); // Try again after 10 seconds until 'available'
                    }
                } else {
                    util.log('Error describing Cache Cluster:' + err);
                }
            });
    }

    getElastiCacheEndpoints();
}

init();