WebUSB on Android Chrome

I have a website that connects to a label printer over WebUSB and prints labels. Everything works well on the computer, but unfortunately it does not work on Android. Android finds my printer "LP 433", but after that nothing happens when it should print. I connect to the printer with an OTG cable. Google Chrome version: 83.
In other browsers WebUSB does not respond at all, except Firefox, which says outright that it does not support WebUSB. My site is served over HTTPS.
async findDevices() {
  try {
    const navi: any = navigator;
    const devices = await navi.usb.getDevices();
    console.log(devices);
    devices.forEach(dev => {
      console.log(dev);
    });
    const device = await navi.usb.requestDevice({filters: [{vendorId: 15923}]});
    await device.open(); // Begin a session.
    await device.selectConfiguration(1); // Select configuration #1 for the device.
    await device.claimInterface(0); // Request exclusive control over interface #0.
    await device.controlTransferOut({
      requestType: 'vendor',
      recipient: 'interface',
      request: 0x01, // vendor-specific request: enable channels
      value: 0x0013, // 0b00010011 (channels 1, 2 and 5)
      index: 0x0001 // Interface 1 is the recipient
    })
      .then(() => device.transferIn(1, 64)) // Wait for up to 64 bytes of data from endpoint #1.
      .then(result => {
        const decoder = new TextDecoder();
        console.log('Received: ' + decoder.decode(result.data));
        // document.getElementById('target').innerHTML = 'Received: ' + decoder.decode(result.data);
      })
      .catch(error => {
        console.log(error);
      });
    const imgData: any = await this.toPrintFormat();
    for (let p = 0; p < this.pcs; p++) {
      for (let i = 0; i < imgData.length; i++) {
        const string2 =
          'sLABEL,' + imgData[i].width + ',' + imgData[i].height + '\n' +
          'sGAP,0,0\n' +
          'sTPHY,0\n' +
          'sDENSITY,7\n' +
          'sSPEED,' + this.speedp + '\n' +
          'sTHERMAL,' + this.thermop + '\n' +
          'sDIRECTION,1\n' +
          'sORIGIN,0,0\n' +
          'wSAVEB64,' + imgData[i].src.length + ',"WLP000",' + imgData[i].src + '\n' +
          'wLOADIMG,0,' + 0 + ',1,"WLP000"\n' +
          'wPRINT,1\n';
        console.log(string2);
        const encoder = new TextEncoder();
        const data = encoder.encode(string2);
        device.transferOut(2, data)
          .catch(error => {
            console.log(error);
          });
      }
    }
  } catch (error) {
    console.log(error);
    // document.getElementById('target').innerHTML = error;
  }
}
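One way to narrow the failure down (a sketch, not a confirmed fix; it assumes the same device object and runs inside the async findDevices method above) is to inspect the USBOutTransferResult that transferOut resolves with, since a claim or transfer problem otherwise fails silently in the code above:

// Sketch: log the result of each bulk transfer instead of discarding it.
// transferOut resolves with a USBOutTransferResult ({ bytesWritten, status }).
const result = await device.transferOut(2, data);
if (result.status !== 'ok') {
  console.log('Transfer failed with status ' + result.status +
    ' after ' + result.bytesWritten + ' bytes');
}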

Related

how to use npm cli-progress in ssh2-sftp-client

I have a project that uses npm ssh2-sftp-client to download files from a remote server, and I want to display download progress in the console. Downloading files works fine, but I do not know how to use cli-progress to show the progress while the files are downloading.
const SftpClient = require('ssh2-sftp-client');

function getConnect(ip, name, pwd, remotepath, localpath) {
  const sftp = new SftpClient();
  sftp.connect({
    host: ip,
    port: 22,
    username: name,
    password: pwd
  }).then(async () => {
    const files = await sftp.list(remotepath, '.');
    for (var j = 0; j < files.length; j++) {
      var e = files[j];
      await sftp.fastGet(remotepath + "/" + e.name, localpath + "\\" + e.name);
    }
  });
}
I have revised it; hopefully this is better:
const SftpClient = require('ssh2-sftp-client');
const { createWriteStream } = require('fs');
const Throttle = require('throttle');
const progress = require('progress-stream');

function getConnect(ip, name, pwd, remotepath, localpath) {
  const sftp = new SftpClient();
  sftp.connect({
    host: ip,
    port: 22,
    username: name,
    password: pwd
  }).then(async () => {
    const files = await sftp.list(remotepath, '.');
    for (var j = 0; j < files.length; j++) {
      var e = files[j];
      const throttleStream = new Throttle(1); // a "Throttle" instance that reads at 1 bps
      const progressStream = progress({
        length: e.size,
        time: 100 // ms
      });
      progressStream.on('progress', (progress) => {
        process.stdout.write("\r [" + e.name + "] downloaded [" + progress.percentage.toFixed(2) + "%]");
      });
      const outStream = createWriteStream(localpath);
      throttleStream.pipe(progressStream).pipe(outStream);
      try {
        await sftp.get(remotepath + "/" + e.name, throttleStream, { autoClose: false });
      } catch (err) {
        console.log('sftp error', err);
      } finally {
        await sftp.end(); // note: this closes the connection after the first file
      }
    }
  });
}
I followed the suggestion from @Abbas Agus Basari, like this:
await sftp.fastGet(secondPath + "/" + e.name, localPath + "\\" + e.name, {
  step: step => {
    const percent = Math.floor((step / e.size) * 100);
    process.stdout.write("\r" + "【" + e.name + "】downloaded【" + percent + '%】');
  }
});
When I run it, the output looks like this: https://i.stack.imgur.com/97sRi.png
I downloaded two files from the remote server, but the console only shows one file reaching 100%; the other stopped at 59%.
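Since the question specifically asks about cli-progress, which none of the attempts above actually use, here is a minimal sketch of wiring it into fastGet's step callback. It assumes one bar per file, the same e.size / e.name fields returned by sftp.list, and the step callback signature documented by ssh2-sftp-client:

const cliProgress = require('cli-progress');

// One progress bar per file, driven by fastGet's step callback.
const bar = new cliProgress.SingleBar({
  format: e.name + ' [{bar}] {percentage}%'
}, cliProgress.Presets.shades_classic);
bar.start(e.size, 0);

await sftp.fastGet(remotepath + "/" + e.name, localpath + "\\" + e.name, {
  step: (totalTransferred, chunk, total) => {
    bar.update(totalTransferred);
  }
});
bar.stop();

For several files downloading in one loop, cli-progress also provides a MultiBar container, so each file can keep its own line instead of overwriting a single one.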

Proper async/await function

I am attempting to run a bot that scrapes Amazon (using amazon-buddy) for certain products (using an array of ASINs) and checks the price. If the price is not 0, it should send a message on Discord. I currently have this set to run every 30 seconds and it's working, but there are times when it seems like each element in the forEach loop is not waiting for the previous one to get a response, and my function doesn't seem to be correct (I'm still trying to understand async/await properly).
Is there a better way to run this so that each element waits for the previous element to get scraped before moving on to the next one, and THEN run the loop again after 30 seconds?
(function () {
  var c = 0;
  var timeout = setInterval(function () {
    const checkStock = async () => {
      // NOTE: forEach does not wait for an async callback, so these all run in parallel.
      config.items.itemGroup.forEach(async element => {
        console.log('Checking stock on ' + element);
        try {
          const product_by_asin = await amazonScraper.asin({ asin: element });
          console.log(product_by_asin);
          const price = product_by_asin.result[0].price.current_price;
          const symbol = product_by_asin.result[0].price.symbol;
          const asin = product_by_asin.result[0].asin;
          const title = product_by_asin.result[0].title;
          const url = product_by_asin.result[0].url;
          const image = product_by_asin.result[0].main_image;
          if (price != 0) {
            const inStockResponse = {
              color: 0x008000,
              title: title + ' is in stock!',
              url: url,
              author: {
                name: config.botName,
                icon_url: config.botImg,
                url: config.botUrl
              },
              description: '<#767456705306165298>, click the title to go purchase!\n\n' +
                'Price: ' + symbol + price,
              thumbnail: {
                url: image
              },
              timestamp: new Date()
            };
            message.channel.send({ embed: inStockResponse });
            console.log(title + ' (' + asin + ') IS available!');
          } else {
            console.log(title + ' (' + asin + ') IS NOT available!');
          }
        } catch (error) {
          console.log(error);
        }
      });
    };
    checkStock();
    console.log('Counter: ' + c);
    c++;
  }, 30000);
})();
You could use a for...of loop which can wait for each iteration to finish:
async function checkItems(items) {
  // Check all items, wait for each to complete.
  for (const item of items) {
    try {
      const product_by_asin = await amazonScraper.asin({ asin: item });
      console.log(product_by_asin);
      const price = product_by_asin.result[0].price.current_price;
      const symbol = product_by_asin.result[0].price.symbol;
      const asin = product_by_asin.result[0].asin;
      const title = product_by_asin.result[0].title;
      const url = product_by_asin.result[0].url;
      const image = product_by_asin.result[0].main_image;
      if (price != 0) {
        const inStockResponse = {
          color: 0x008000,
          title: title + " is in stock!",
          url: url,
          author: {
            name: config.botName,
            icon_url: config.botImg,
            url: config.botUrl,
          },
          description:
            "<#767456705306165298>, click the title to go purchase!\n\n" +
            "Price: " +
            symbol +
            price,
          thumbnail: {
            url: image,
          },
          timestamp: new Date(),
        };
        // NOTE: you might want to wait for this too; the error
        // isn't currently being handled either.
        message.channel.send({ embed: inStockResponse });
        console.log(title + " (" + asin + ") IS available!");
      } else {
        console.log(title + " (" + asin + ") IS NOT available!");
      }
    } catch (err) {
      console.log(err);
    }
  }
  // Wait 30s and check again.
  setTimeout(() => checkItems(items), 30000);
}
checkItems(config.items.itemGroup);
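Following up on the NOTE in the answer: a minimal sketch of waiting for the Discord send and handling its failure (assuming message.channel.send returns a promise, as it does in discord.js) would be:

// Wait for the message to actually be sent, and log a failure
// without aborting the rest of the loop.
try {
  await message.channel.send({ embed: inStockResponse });
  console.log(title + " (" + asin + ") IS available!");
} catch (sendErr) {
  console.log("Failed to send Discord message:", sendErr);
}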

Proper calling sequence to have Node.js disconnect from database after processing input file

I am creating a very simple Node.js utility to process each record in a text file separately (line by line), but it is surprisingly difficult to handle the following scenario due to the inherently asynchronous world of Node:
1. Open a connection to the database
2. Read each line of the text file
3. Based on conditions within the processed text of the line, look up a record in the database
4. Upon completion of reading the text file, close the database connection
The challenge I face is that the text file is read line by line (using the 'readline' module), attaching a listener to the 'line' event emitted by the module. The lines of the file are all processed rapidly and the queries to the database get queued up. I have tried many approaches to essentially create a synchronous process, to no avail. Here is my latest attempt, which is definitely full of async/await functions. Being a longtime developer but new to Node.js, I know I am missing something simple. Any guidance will be greatly appreciated.
const { Pool, Client } = require('pg');

const client = new Client({
  user: '*****',
  host: '****',
  database: '*****',
  password: '******#',
  port: 5432,
});

client.connect()
  .then(() => {
    console.log("Connected");
    console.log("Processing file");
    const fs = require('fs');
    const readline = require('readline');
    const instream = fs.createReadStream("input.txt");
    const outstream = new (require('stream'))();
    const rl = readline.createInterface(instream, outstream);
    rl.on('line', async function (line) {
      var callResult;
      if (line.length > 0) {
        var words = line.replace(/[^0-9a-z ]/gi, '').split(" ");
        var len = words.length;
        for (var i = 0; i < words.length; i++) {
          if (words[i].length === 0) {
            words.splice(i, 1);
            i--;
          } else {
            words[i] = words[i].toLowerCase();
          }
        }
        for (var i = 0; i < words.length; i++) {
          if (i <= words.length - 3) {
            callResult = await isKeyPhrase(words[i].trim() + " " + words[i + 1].trim() + " " + words[i + 2].trim());
            if (!callResult) {
              callResult = await isKeyPhrase(words[i].trim() + " " + words[i + 1].trim());
              if (!callResult) {
                callResult = await isKeyPhrase(words[i].trim());
              }
            }
          } else if (i <= words.length - 2) {
            callResult = await isKeyPhrase(words[i].trim() + " " + words[i + 1].trim());
            if (!callResult) {
              callResult = await isKeyPhrase(words[i].trim());
            }
          } else if (i < words.length) {
            callResult = await isKeyPhrase(words[i].trim());
          }
        }
      } // (line.length > 0)
    });
    rl.on('close', function (line) {
      console.log('done reading file.');
      // stubbed out because queries are still running
      // client.end();
    });
  }).catch((err) => {
    console.error('connection error', err.stack);
  });
async function isKeyPhrase(keyPhraseText) {
  var callResult = false;
  return new Promise(async function (resolve, reject) {
    const query = {
      name: 'get-name',
      text: 'select KP.EntryID from KeyPhrase KP where (KP.KeyPhraseText = $1) and (Active = true)',
      values: [keyPhraseText],
      rowMode: 'array'
    };
    // promise
    await client.query(query)
      .then(result => {
        if (result.rowCount == 1) {
          console.log(`Key phrase '${keyPhraseText}' found in table with Phase ID = ${result.rows}`);
          callResult = true; // was misspelled "calResult", which silently left the flag false
        }
      }).catch(e => {
        console.error(e.stack);
        reject(e);
      });
    resolve(callResult);
  });
}
Welcome to StackOverflow. :)
Indeed, there's no (sensible) way to read a file synchronously while trying to interact with a database per line of data. There's no feasible way at all if the file is bigger than roughly 1/8th of your memory.
This doesn't mean, however, that there's no way of writing sane code for this. The only problem is that standard Node streams (including readline) do not wait for async code.
I'd recommend using scramjet, a functional stream programming framework pretty much designed for your use case (disclaimer: I'm the author). Here's what the code would look like:
const { Pool, Client } = require('pg');
const { StringStream } = require("scramjet");
const fs = require('fs');

const client = new Client({
  user: '*****',
  host: '****',
  database: '*****',
  password: '******#',
  port: 5432,
});

client.connect()
  .then(async () => {
    console.log("Connected, processing file");
    return StringStream
      // this creates a "scramjet" stream from the input
      .from(fs.createReadStream("input.txt"))
      // this splits the file line by line
      .lines()
      // the next line is just to show when the file is fully read
      .use(stream => stream.whenEnd.then(() => console.log("done reading file.")))
      // this splits the words like the first "for" loop in your code
      .map(line => line.toLowerCase().replace(/[^0-9a-z ]+/g, '').split(" "))
      // this one gets rid of empty lines (i.e. no words)
      .filter(line => line.length > 0)
      // this checks the key phrases like the second "for" loop in your code
      .map(async words => {
        for (var i = 0; i < words.length; i++) {
          const callResult = await isKeyPhrase(words.slice(i, i + 3).join(" "));
          if (callResult) return callResult;
        }
      })
      // this runs the above list of operations to the end and returns a promise
      .run();
  })
  .then(() => {
    console.log("done processing file.");
    client.end();
  })
  .catch((e) => {
    console.error(e.stack);
  });
async function isKeyPhrase(keyPhraseText) {
  const query = {
    name: 'get-name',
    text: 'select KP.EntryID from KeyPhrase KP where (KP.KeyPhraseText = $1) and (Active = true)',
    values: [keyPhraseText],
    rowMode: 'array'
  };
  const result = await client.query(query);
  if (result.rowCount > 0) {
    console.log(`Key phrase '${keyPhraseText}' found in table with Phase ID = ${result.rows}`);
    return true;
  }
  return false;
}
I compacted and optimized your code in some places, but in general this should get you what you want - scramjet adds an asynchronous mode to each operation and will wait until all the operations have ended.
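A note on newer Node versions: since Node 12 the readline interface is also async-iterable, which gives line-by-line processing that naturally pauses on await without extra dependencies. A minimal sketch under that assumption, reusing the isKeyPhrase helper and client from above:

const fs = require('fs');
const readline = require('readline');

async function processFile() {
  const rl = readline.createInterface({
    input: fs.createReadStream('input.txt'),
    crlfDelay: Infinity
  });
  // for await...of consumes one line at a time and waits while we await queries.
  for await (const line of rl) {
    const words = line.toLowerCase().replace(/[^0-9a-z ]+/g, '').split(' ').filter(w => w.length > 0);
    for (let i = 0; i < words.length; i++) {
      if (await isKeyPhrase(words.slice(i, i + 3).join(' '))) break;
    }
  }
  console.log('done reading file.');
  await client.end();
}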

Batch requests placed via nodejs request.get using rxjs

I am currently using the following function to create a Promise from the result of calling request.get:
function dlPromiseForMeta(meta) {
  return new Promise(function (resolve, reject) {
    meta.error = false;
    var fileStream = fs.createWriteStream(meta.filePath);
    fileStream.on('error', function (error) {
      meta.error = true;
      console.log('filesystem ' + meta.localFileName + ' ERROR: ' + error);
      console.log('record: ' + JSON.stringify(meta));
      reject(meta);
    });
    fileStream.on('close', function () {
      resolve(meta);
    });
    request.get({
      uri: meta.url,
      rejectUnauthorized: false,
      followAllRedirects: true,
      pool: {
        maxSockets: 1000
      },
      timeout: 10000,
      agent: false
    })
      .on('socket', function () {
        console.log('request ' + meta.localFileName + ' made');
      })
      .on('error', function (error) {
        meta.error = true;
        console.log('request ' + meta.localFileName + ' ERROR: ' + error);
        console.log('record: ' + JSON.stringify(meta));
        reject(meta);
      })
      .on('end', function () {
        console.log('request ' + meta.localFileName + ' finished');
        fileStream.close();
      })
      .pipe(fileStream);
  });
}
This works fine except when I am trying to call it too many times, as in the example below, where imagesForKeywords returns an rxjs Observable:
imagesForKeywords(keywords, numberOfResults)
  .mergeMap(function (meta) {
    meta.fileName = path.basename(url.parse(meta.url).pathname);
    meta.localFileName = timestamp + '_' + count++ + '_' + meta.keyword + '_' + meta.source + path.extname(meta.fileName);
    meta.filePath = path.join(imagesFolder, meta.localFileName);
    return rxjs.Observable.fromPromise(dlPromiseForMeta(meta));
  });
I start getting ESOCKETTIMEDOUT errors when the source observable becomes sufficiently large.
So what I would like to do is somehow batch what happens in mergeMap for every, say, 100 entries... so I do those 100 in parallel, and each batch serially, and then merge them at the end.
How can I accomplish this using rxjs?
I think the simplest thing to use is bufferTime(), which triggers after a certain number of ms but also takes a count as its last parameter.
Using a timeout seems useful in case there is a stream pattern that does not reach the batch limit in a reasonable time.
If that does not fit your use case, leave a comment with some more details and I will adjust accordingly.
Your code will look like this:
- bufferTime - as described above
- forkJoin - run the buffer contents in parallel and emit when all return
- mergeMap - coalesce the results
imagesForKeywords(keywords, numberOfResults)
  .map(function (meta) {
    meta.fileName = path.basename(url.parse(meta.url).pathname);
    meta.localFileName = timestamp + '_' + count++ + '_' + meta.keyword + '_' + meta.source + path.extname(meta.fileName);
    meta.filePath = path.join(imagesFolder, meta.localFileName);
    return meta;
  })
  .bufferTime(maxTimeout, null, maxBatch)
  .mergeMap(items => rxjs.Observable.forkJoin(items.map(dlPromiseForMeta)))
  .mergeMap(arr => rxjs.Observable.from(arr))
Here's a runnable mockup to show it working. I have commented out the last mergeMap to show the buffering.
I have assumed a couple of things:
- imagesForKeywords breaks keywords into an observable stream of keyword
- there is one keyword per dlPromiseForMeta call
// Some mocking
const imagesForKeywords = (keywords, numberOfResults) => {
  return Rx.Observable.from(keywords.map(keyword => { return { keyword } }))
}
const dlPromiseForMeta = (meta) => {
  return Promise.resolve(meta.keyword + '_image')
}
// Compose meta - looks like it can run at scale, since it is just string manipulation.
const composeMeta = meta => {
  // meta.fileName = path.basename(url.parse(meta.url).pathname);
  // meta.localFileName = timestamp + '_' + count++ + '_' + meta.keyword + '_' + meta.source + path.extname(meta.fileName);
  // meta.filePath = path.join(imagesFolder, meta.localFileName);
  return meta;
}

const maxBatch = 3
const maxTimeout = 50 // ms

const bufferedPromises = (keywords, numberOfResults) =>
  imagesForKeywords(keywords, numberOfResults)
    .map(composeMeta)
    .bufferTime(maxTimeout, null, maxBatch)
    .mergeMap(items => Rx.Observable.forkJoin(items.map(dlPromiseForMeta)))
    // .mergeMap(arr => Rx.Observable.from(arr))

const keywords = ['keyw1', 'keyw2', 'keyw3', 'keyw4', 'keyw5', 'keyw6', 'keyw7'];
const numberOfResults = 1;

bufferedPromises(keywords, numberOfResults)
  .subscribe(console.log)

<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.5.6/Rx.js"></script>
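A side note, not part of the answer above: if the goal is simply to cap how many downloads run at once rather than to batch them, mergeMap also accepts a concurrency limit, which keeps that many requests in flight and starts the next one as each completes. A sketch against the same RxJS 5 API used here:

// Limit concurrency instead of batching: at most 100 downloads in flight.
// (In RxJS 5, mergeMap's third argument is the concurrency limit.)
imagesForKeywords(keywords, numberOfResults)
  .map(composeMeta)
  .mergeMap(meta => Rx.Observable.fromPromise(dlPromiseForMeta(meta)), null, 100)
  .subscribe(console.log);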

Angular download file from byte array sent from node.js

I think I'm very close to what I want to do. I have the following API GET method in Node.js that retrieves a file stored as varbinary(MAX) in a SQL Server database. It was converted from a base64-encoded string before being inserted, so the content-type information was stripped from the string.
node.js
router.get('/getFile', (req, res) => {
  console.log("Calling getFile for file " + req.query.serialNumber + ".");
  var serialNumber = req.query.serialNumber;
  let request = new sql.Request(conn);
  request.query('SELECT FileName + \'.\' + FileExtension AS \'File\', FileType, ContentType, SerialNumber, Chart ' +
    'FROM dbo.ChangeFiles ' +
    'WHERE SerialNumber = ' + serialNumber)
    .then(function (recordset) {
      log("Successfully retrieved file " + recordset[0].SerialNumber + " from database.");
      log("Length of blob " + recordset[0].File + " is " + recordset[0].Chart.length);
      res.status(200);
      res.setHeader('Content-Type', recordset[0].ContentType);
      res.setHeader('Content-Disposition', 'attachment;filename=' + recordset[0].File);
      res.end(Buffer.from((recordset[0].Chart)));
    }).catch(function (err) {
      log(err);
      res.status(500).send("Issue querying database!");
    });
});
That works fine, but it has not been clear to me what to do in Angular to retrieve it and prompt the user for a download, nor is there much in the way of help/resources online. Here is what I have so far in my service class.
fileDownload.service.ts
downloadFile(serialNumber: string): Observable<any> {
  return this.http.get(this.baseURL + '/getFile', { params: { serialNumber: serialNumber } })
    .map(this.extractFile);
}

private extractFile(response: Response) {
  const file = new Blob([response.blob]);
  FileSaver.saveAs(file);
  // const url = window.URL.createObjectURL(file);
  // window.open(url);
  return file;
}
As you can see, I've tried a couple of approaches. The commented-out portion of the extractFile method didn't work at all, and using the FileSaver.saveAs function produces a file download of an unknown type, so the headers sent from Node.js don't seem to affect the file itself.
Would someone be able to advise how to proceed in Angular with what is successfully being sent from Node.js, so that I can download the file regardless of type?
Thanks so much in advance.
I got it working after all. I had to rework the API call so that it sent all of the file information separately, so that the MIME type and file name could be assigned to the file on the client side in the component class. For some reason, when I tried to do it all in the API it wouldn't work, so this was my workaround. Here is what works for me.
node.js api
router.get('/getFile', (req, res) => {
  console.log("Calling getFile for file " + req.query.serialNumber + ".");
  var serialNumber = req.query.serialNumber;
  let request = new sql.Request(conn);
  request.query('SELECT FileName + \'.\' + FileExtension AS \'File\', FileType, ContentType, SerialNumber, Chart ' +
    'FROM dbo.ChangeFiles ' +
    'WHERE SerialNumber = ' + serialNumber)
    .then(function (recordset) {
      log("Successfully retrieved file " + recordset[0].SerialNumber + " from database.");
      log("Length of blob " + recordset[0].File + " is " + recordset[0].Chart.length);
      res.send(recordset[0]);
    }).catch(function (err) {
      log(err);
      res.status(500).send("Issue querying database!");
    });
});
component class
downloadFile(serialNumber: string): void {
  this.changeService.downloadFile(serialNumber).subscribe((res: any) => {
    const ab = new ArrayBuffer(res.Chart.data.length);
    const view = new Uint8Array(ab);
    for (let i = 0; i < res.Chart.data.length; i++) {
      view[i] = res.Chart.data[i];
    }
    const file = new Blob([ab], { type: res.ContentType });
    FileSaver.saveAs(file, res.File);
    console.log(res);
  });
}
service class
downloadFile(serialNumber: string): Observable<any> {
  return this.http.get(this.baseURL + '/getFile', { params: { serialNumber: serialNumber } })
    .map(this.extractFile);
}

private extractFile(response: Response) {
  // const file = new Blob([response.blob]);
  // FileSaver.saveAs(file);
  // const url = window.URL.createObjectURL(file);
  // window.open(url);
  const body = response.json();
  return body || {};
}
Update your code to call subscribe instead of map.
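An alternative worth noting, which keeps the original binary-streaming endpoint: with Angular's newer HttpClient you can request the response as a Blob directly, which preserves the Content-Type header set in Node and avoids rebuilding the byte array by hand. A sketch, assuming HttpClient is injected as this.http and the first /getFile endpoint above:

downloadFile(serialNumber: string): void {
  this.http.get(this.baseURL + '/getFile', {
    params: { serialNumber: serialNumber },
    responseType: 'blob' // keep the body as binary instead of parsing JSON
  }).subscribe((file: Blob) => {
    // The blob already carries the MIME type from the Content-Type header.
    // The filename below is a placeholder; the real name could be read from
    // the Content-Disposition header by requesting { observe: 'response' }.
    FileSaver.saveAs(file, 'download_' + serialNumber);
  });
}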
