I'm still learning, but I've been trying to build a faucet bot for 5minutebitcoin.com using selenium-webdriver in Node.js, and I'm trying to figure out how to implement a check for the 5-minute timer. The script should check whether the timer is present; if it is, sleep for the remaining countdown, and if there's no time left on the countdown, proceed with the script. So far I have:
const { Builder, By, Key, until } = require('selenium-webdriver');
let chrome = require('selenium-webdriver/chrome');

let address = 'btcAddress';
let balanceUrl = 'http://5minutebitcoin.com/check-balance/?the99btcbfaddress=';
let claimBtcUrl = 'http://5minutebitcoin.com/';

async function main() {
  while (true) {
    let driver = new Builder()
      .forBrowser('chrome')
      .setChromeOptions(new chrome.Options().headless())
      .build();

    let getBalance = await driver.get(balanceUrl + address);
    driver.manage().setTimeouts({ implicit: 5000 });
    balanceText = await driver.findElement(By.css('div.row.info'));
    console.log(await balanceText.getText());

    try {
      let minuteTimerText = await driver.findElement(By.css('timer'));
      // assert(minuteTimerText = true);
      return true
    }
    catch (err) {
      console.log("\n Timer not ready: ");
      sleep(5)
    }

    let claimBtc = await driver.get(claimBtcUrl);
    driver.manage().setTimeouts({ implicit: 5000 });
    submit = await driver.findElement(By.name('claim_coins')).click()
    console.log("\n Submitting to faucet using: " + address + "\n\n");
    await driver.quit();
  }
};
main()
At the moment it does:
Unpaid address balance: 72705 Satoshis
Address seniority: 9 days
Seniority bonus: 5% on all direct payouts
Time until next seniority level: 5 days
Submits per 24 hours: 15 / 48
Timer not ready:
Submitting to faucet using:
And loops.....
It's supposed to:
Get the address balance and log it.
Check whether the timer is active.
If the timer is still active, wait until it runs out.
Solve the captcha.
Click claim coins.
Repeat every 5 minutes.
I'll be using tesseract-ocr to solve the captcha.
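For the timer check, one approach (a minimal sketch, untested against the live page: the '#timer' selector and the "mm:ss" text format are assumptions you would need to verify in the page source) is findElements, which resolves to an empty array instead of throwing when nothing matches, combined with driver.sleep for the remaining time:

// Hypothetical helper: wait out the faucet countdown if one is shown.
async function waitForCountdown(driver) {
  let timers = await driver.findElements(By.css('#timer')); // selector is a guess
  if (timers.length === 0) return; // no countdown visible, claim right away
  let text = await timers[0].getText(); // e.g. "4:37"
  let [minutes, seconds] = text.split(':').map(Number);
  let remainingMs = (minutes * 60 + seconds) * 1000;
  console.log(`Timer active, sleeping ${remainingMs} ms`);
  await driver.sleep(remainingMs + 1000); // small buffer past zero
}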
I want to implement session-expire functionality: when the user stays on a page for more than 15 minutes, it should show an alert that the session has expired. Meanwhile, if someone copies that URL and pastes it into another tab/browser/incognito window (where they are already logged in), or refreshes the page, the countdown should not restart. For example, if the countdown is in the middle, say 5 minutes left out of 15, then after pasting the URL into another tab it should continue from 5 minutes left, then 4 minutes left, and so on, without restarting from 15 minutes.
I am not sure what the best way to implement this is in a MERN stack project (should I use a library, a cookie, or local storage?), and how to do it securely.
I tried a sample implementation, but it does not work cross-browser, it restarts the session in incognito, and if I refresh the page after 15 minutes the countdown starts again. A better example or suggestion on how to implement this functionality would be really appreciated. TIA :-)
My dummy implementation example -
countdowntimer.tsx file
import React from "react";
import { useHistory } from "react-router-dom";
import { countDown, COUNTER_KEY } from "./countdowntimer.utils";

export const CountdownTimer = () => {
  const history = useHistory();
  const [countdownSessionExpired, setCountdownSessionExpired] = React.useState(false);

  React.useEffect(() => {
    const countDownTime = localStorage.getItem(COUNTER_KEY) || 10;
    countDown(Number(countDownTime), () => {
      console.log("countDownTime:", countDownTime);
      setCountdownSessionExpired(true);
    });
  }, [history]);

  return (
    <>
      {countdownSessionExpired ? (
        <div>session is expired</div>
      ) : (
        <div>view the page</div>
      )}
    </>
  );
};
==================================================================================================
countdowntimer.utils.tsx file
export const COUNTER_KEY = "myCounter";

export function countDown(i: number, callback: Function) {
  const timer = setInterval(() => {
    const minutes = Math.floor(i / 60);
    const seconds = i % 60;
    // Zero-pad for display: concatenate the string "0", not the number 0
    const mm = minutes < 10 ? "0" + minutes : String(minutes);
    const ss = seconds < 10 ? "0" + seconds : String(seconds);
    // document.getElementById("displayDiv")!.innerHTML = "Time (h:min:sec) left for this station is " + "0:" + mm + ":" + ss;
    console.log("Time (h:min:sec) left for this station is:", "0:" + mm + ":" + ss);
    i--; // decrement exactly once per tick
    if (i > 0) {
      localStorage.setItem(COUNTER_KEY, String(i));
    } else {
      localStorage.removeItem(COUNTER_KEY);
      clearInterval(timer);
      callback();
    }
  }, 1000);
}
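One way to make the countdown survive refreshes, new tabs, and incognito windows (where localStorage is not shared) is to persist an absolute expiry timestamp on the server, keyed to the logged-in session, instead of a decrementing counter in the browser. Every tab then asks the server when the session expires and renders the difference. A minimal Express sketch, assuming express-session provides req.sessionID (the endpoint name and the in-memory Map are illustrative; use Redis or the database in production):

// server: remember one absolute deadline per session
const express = require("express");
const session = require("express-session");
const app = express();
app.use(session({ secret: "change-me", resave: false, saveUninitialized: true }));

const sessionDeadlines = new Map(); // sessionId -> expiry epoch ms

app.get("/api/session-deadline", (req, res) => {
  if (!sessionDeadlines.has(req.sessionID)) {
    // first visit for this session: start the 15-minute window exactly once
    sessionDeadlines.set(req.sessionID, Date.now() + 15 * 60 * 1000);
  }
  res.json({ deadline: sessionDeadlines.get(req.sessionID) });
});

app.listen(3000);

// client: on every tick, derive the remaining time from the deadline, e.g.
// const remainingSec = Math.max(0, Math.round((deadline - Date.now()) / 1000));

Because the deadline is absolute and lives server-side, a refresh or a second tab can never reset it, and the client cannot tamper with it.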
I'm working on a node.js web server using express.js that should offer a dashboard to monitor database servers.
The architecture is quite simple:
a gatherer retrieves the information at a predefined interval and stores the data
express.js listens to user requests and shows a dashboard based on the stored data
I'm now wondering how to best implement the gatherer to make sure that it does not block the main loop. The simplest solution seems to be a setTimeout-based approach, but I was wondering what the "proper" way to architect this would be?
Your concern is whether your information-gathering step will block the main loop. It probably is not as CPU-intensive as it seems. Because it's a monitoring app, it probably gathers information by contacting other machines, something like this:
async function gather () {
  const results = []
  let result
  result = await getOracleMetrics ('server1')
  results.push(result)
  result = await getMySQLMetrics ('server2')
  results.push(result)
  result = await getMySQLMetrics ('server3')
  results.push(result)
  await storeMetrics(results)
}
This is not a cpu-intensive function. (If you were doing a fast Fourier transform on an image, that would be a cpu-intensive function.)
It spends most of its time awaiting results, and then a little time storing them. Using async / await gives you the illusion it runs synchronously. But, each await yields the main loop to other things.
You might invoke it every minute, something like this. The .then().catch() chain invokes it asynchronously without blocking:
setInterval (
  function go () {
    gather()
      .then()
      .catch(console.error)
  }, 1000 * 60)
If you do actually have some cpu-intensive computation to do, you have a few choices.
offload it to a worker thread (see the sketch after the chunking example below).
break it up into short chunks, with sleeps between them:
sleep = function sleep (howLong) {
  return new Promise(function (resolve) {
    setTimeout(() => { resolve() }, howLong)
  })
}

async function gather () {
  for (let chunkNo = 0; chunkNo < 100; chunkNo++) {
    doComputationChunk(chunkNo)
    await sleep(1)
  }
}
That sleep() function yields to the main loop by waiting for a timeout to expire.
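For the worker-thread option, here is the shape it might take (a sketch; the summation loop is just a stand-in for a real CPU-heavy computation):

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads')

if (isMainThread) {
  // main thread: spawn this same file as a worker and await its answer
  function runHeavyTask (input) {
    return new Promise(function (resolve, reject) {
      const worker = new Worker(__filename, { workerData: input })
      worker.once('message', resolve)
      worker.once('error', reject)
    })
  }
  runHeavyTask({ n: 1e9 }).then(console.log).catch(console.error)
} else {
  // worker thread: the CPU-intensive part runs here, off the main loop
  let sum = 0
  for (let i = 0; i < workerData.n; i++) sum += i
  parentPort.postMessage(sum)
}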
None of this is debugged, sorry to say.
For recurring tasks I prefer to use node-schedule and schedule the jobs on app start-up.
In case you don't want to run CPU-expensive tasks in the main thread, you can always run the code below in a worker thread in parallel instead of the main thread - see info here.
Here are two examples, one with a recurrence rule and one with an interval in minutes using a cron expression:
app.js
let myScheduler = require('./myscheduler.js');
myScheduler.scheduleRecurrence();
// And/Or
myScheduler.scheduleInterval();
myscheduler.js
/* INFO: Require node-schedule for starting jobs of scheduled tasks */
var schedule = require('node-schedule');

/* INFO: Helper for constructing a cron expression */
function getCronExpression(minutes) {
  if (minutes < 60) {
    return `*/${minutes} * * * *`;
  }
  else {
    let hours = (minutes - minutes % 60) / 60;
    let minutesRemainder = minutes % 60;
    if (minutesRemainder === 0) {
      // '*/0' is not valid cron syntax, so fire at minute 0 of every n-th hour
      return `0 */${hours} * * *`;
    }
    return `*/${minutesRemainder} */${hours} * * *`;
  }
}
module.exports = {
  scheduleRecurrence: () => {
    // Schedule a job @ 01:00 AM every day (Mo-Su)
    var rule = new schedule.RecurrenceRule();
    rule.hour = 1;
    rule.minute = 0;
    rule.second = 0;
    rule.dayOfWeek = new schedule.Range(0, 6);
    var dailyJob = schedule.scheduleJob(rule, function () {
      /* INFO: Put your database-ops or other routines here */
      // ...
      // ..
      // .
    });
    // INFO: Verbose output to check if job was scheduled:
    console.log(`JOB:\n${dailyJob}\n HAS BEEN SCHEDULED..`);
  },
  scheduleInterval: () => {
    let intervalInMinutes = 60;
    let cronExpression = getCronExpression(intervalInMinutes);
    // INFO: Define a unique job-name in case you want to cancel it
    let uniqueJobName = "myIntervalJob"; // should be unique
    // INFO: Schedule the job
    var job = schedule.scheduleJob(uniqueJobName, cronExpression, function () {
      /* INFO: Put your database-ops or other routines here */
      // ...
      // ..
      // .
    })
    // INFO: Verbose output to check if job was scheduled:
    console.log(`JOB:\n${job}\n HAS BEEN SCHEDULED..`);
  }
}
In case you want to cancel a job, you can use its unique job-name:
function cancelCronJob(uniqueJobName) {
  /* INFO: Get job instance for cancelling the scheduled task/job */
  let current_job = schedule.scheduledJobs[uniqueJobName];
  if (!current_job) {
    /* INFO: Cron job not found (already cancelled or unknown) */
    console.log(`CRON JOB WITH UNIQUE NAME: '${uniqueJobName}' UNDEFINED OR ALREADY CANCELLED..`);
  }
  else {
    /* INFO: Cron job found and cancelled */
    console.log(`CANCELLING CRON JOB WITH UNIQUE NAME: '${uniqueJobName}'`)
    current_job.cancel();
  }
};
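For example, to cancel the interval job scheduled above, use the name it was registered under:

cancelCronJob("myIntervalJob");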
In my example the recurrence and the interval are hardcoded; obviously you can also pass the recurrence rules or the interval as arguments to the respective functions.
As per your comment:
'When looking at the implementation of node-schedule it feels like a thin layer on top of setTimeout..'
Actually, node-schedule is using long-timeout -> https://www.npmjs.com/package/long-timeout so you are right, it's basically a convenience layer on top of timeouts.
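For what it's worth, the reason a wrapper like long-timeout exists: setTimeout stores its delay as a signed 32-bit integer, so anything above 2147483647 ms (about 24.8 days) overflows, and Node clamps such delays to 1 ms with a warning. A rough sketch of the chaining idea such wrappers use (not long-timeout's actual implementation):

// Largest delay setTimeout can represent
const MAX_TIMEOUT = 2 ** 31 - 1; // 2147483647 ms, roughly 24.8 days

// Chain shorter timeouts until the full delay has elapsed
function setLongTimeout(fn, ms) {
  if (ms <= MAX_TIMEOUT) return setTimeout(fn, ms);
  return setTimeout(() => setLongTimeout(fn, ms - MAX_TIMEOUT), MAX_TIMEOUT);
}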
I wrote up a simple load-testing script that runs N hits against an HTTP endpoint over M async parallel lanes. Each lane waits for its previous request to finish before starting a new one. For my specific use-case, the script randomly picks a numeric "width" parameter to add to the URL each time. The endpoint returns between 200k and 900k of image data on each request, depending on the width parameter, but my script does not care about this data and simply relies on garbage collection to clean it up.
const fetch = require('node-fetch');

const MIN_WIDTH = 200;
const MAX_WIDTH = 1600;
const loadTestUrl = `
http://load-testing-server.com/endpoint?width={width}
`.trim();

async function fetchAll(url) {
  const res = await fetch(url, {
    method: 'GET'
  });
  if (!res.ok) {
    throw new Error(res.statusText);
  }
}

async function doSingleRun(runs, id) {
  const runStart = Date.now();
  console.log(`(id = ${id}) - Running ${runs} times...`);
  for (let i = 0; i < runs; i++) {
    const start = Date.now();
    const width = Math.floor(Math.random() * (MAX_WIDTH - MIN_WIDTH)) + MIN_WIDTH;
    try {
      const result = await fetchAll(loadTestUrl.replace('{width}', `${width}`));
      const duration = Date.now() - start;
      console.log(`(id = ${id}) - Width ${width} Success. ${i+1}/${runs}. Duration: ${duration}`)
    } catch (e) {
      const duration = Date.now() - start;
      console.log(`(id = ${id}) - Width ${width} Error fetching. ${i+1}/${runs}. Duration: ${duration}`, e)
    }
  }
  console.log(`(id = ${id}) - Finished run. Duration: ` + (Date.now() - runStart));
}

(async function () {
  const RUNS = 200;
  const parallelRuns = 10;
  const promises = [];
  const parallelRunStart = Date.now();
  console.log(`Running ${parallelRuns} parallel runs`)
  for (let i = 0; i < parallelRuns; i++) {
    promises.push(doSingleRun(RUNS, i))
  }
  await Promise.all(promises);
  console.log(`Finished parallel runs. Duration ${Date.now() - parallelRunStart}`)
})();
When I run this in Node 14.17.3 on my MacBook Pro running MacOS 10.15.7 (Catalina) with even a modest parallel lane number of 3, after about 120 (x 3) hits of the endpoint the following happens in succession:
Console output ceases in the terminal for the script, indicating the script has halted
Other applications such as my browser are unable to make network connections.
Within 1 - 2 mins other applications on my machine begin to slow down and eventually freeze up.
My entire system crashes with a kernel panic and the machine reboots.
panic(cpu 2 caller 0xffffff7f91ba1ad5): userspace watchdog timeout: remoted connection watchdog expired, no updates from remoted monitoring thread in 60 seconds, 30 checkins from thread since monitoring enabled 640 seconds ago after loadservice: com.apple.logd, total successful checkins since load (642 seconds ago): 64, last successful checkin: 10 seconds ago
service: com.apple.WindowServer, total successful checkins since load (610 seconds ago): 60, last successful checkin: 10 seconds ago
I can very easily stop the progression of these symptoms by doing a Ctrl+C in the terminal and force-quitting the script. Everything quickly gets back to normal, and I can repeat the experiment multiple times before allowing it to crash my machine.
I've monitored Activity Monitor during the progression and there is very little (~1%) CPU usage; memory usage reaches maybe 60-70 MB, though it is pretty evident that network activity peaks during the script's run.
In my search for others with this problem there were only two Stack Overflow articles that came close:
node.js hangs other programs on my mac
Node script causes system freeze when uploading a lot of files
Anyone have any idea why this would happen? It seems very dangerous that a single app/script could so easily bring down a machine without being killed first by the OS.
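One detail that may be relevant here (an observation about the script, not a confirmed diagnosis): fetchAll never consumes the response body, and node-fetch only releases a connection once its body stream has been read or discarded. A sketch of draining the unused body:

async function fetchAll(url) {
  const res = await fetch(url, { method: 'GET' });
  if (!res.ok) {
    throw new Error(res.statusText);
  }
  // Drain the body so the socket is released and the bytes can be GC'd,
  // even though the data itself is never used.
  await res.buffer(); // or: res.body.resume() to discard without buffering
}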
I have this code, which is supposed to send a message and add to a variable every 10 minutes
function btcb() {
  const embed = new Discord.MessageEmbed()
    .setColor('#FF9900')
    .setTitle("Bitcoin block #" + bx.blocks.btc + " was mined")
    .setAuthor('Block mined', 'https://cdn.discordapp.com/emojis/710590499991322714.png?v=1')
  client.channels.cache.get(`710907679186354358`).send(embed)
  bx.blocks.btc = bx.blocks.btc + 1
}
setInterval(btcb, 600000)
But it actually does it every 2-3 minutes instead. What am I doing wrong?
You're better off setting the interval to 1 second and counting 600 seconds before resetting:
let sec = 0;
function btcb() {
  if (sec++ < 600) return;
  sec = 0;
  const embed = new Discord.MessageEmbed()
    .setColor('#FF9900')
    .setTitle("Bitcoin block #" + bx.blocks.btc + " was mined")
    .setAuthor('Block mined', 'https://cdn.discordapp.com/emojis/710590499991322714.png?v=1')
  client.channels.cache.get(`710907679186354358`).send(embed)
  bx.blocks.btc = bx.blocks.btc + 1
}
setInterval(btcb, 1000)
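A timestamp-based variant (my own sketch, not part of the original answer) avoids counting ticks entirely, so a delayed or accidentally duplicated interval callback cannot make the message fire early:

let lastSent = Date.now();
function btcbGuard() {
  const now = Date.now();
  if (now - lastSent < 600000) return; // less than 10 minutes since the last send
  lastSent = now;
  btcb(); // the original embed-sending function from the question
}
setInterval(btcbGuard, 1000);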
I am learning azure functions and durable functions. Looking at the examples for the monitor pattern:
https://learn.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-monitor
public static async Task Run(DurableOrchestrationContext monitorContext, ILogger log)
{
    MonitorRequest input = monitorContext.GetInput<MonitorRequest>();
    if (!monitorContext.IsReplaying) { log.LogInformation($"Received monitor request. Location: {input?.Location}. Phone: {input?.Phone}."); }
    VerifyRequest(input);

    DateTime endTime = monitorContext.CurrentUtcDateTime.AddHours(6);
    if (!monitorContext.IsReplaying) { log.LogInformation($"Instantiating monitor for {input.Location}. Expires: {endTime}."); }

    while (monitorContext.CurrentUtcDateTime < endTime)
    {
        // Check the weather
        if (!monitorContext.IsReplaying) { log.LogInformation($"Checking current weather conditions for {input.Location} at {monitorContext.CurrentUtcDateTime}."); }

        bool isClear = await monitorContext.CallActivityAsync<bool>("E3_GetIsClear", input.Location);

        if (isClear)
        {
            // It's not raining! Or snowing. Or misting. Tell our user to take advantage of it.
            if (!monitorContext.IsReplaying) { log.LogInformation($"Detected clear weather for {input.Location}. Notifying {input.Phone}."); }

            await monitorContext.CallActivityAsync("E3_SendGoodWeatherAlert", input.Phone);
            break;
        }
        else
        {
            // Wait for the next checkpoint
            var nextCheckpoint = monitorContext.CurrentUtcDateTime.AddMinutes(30);
            if (!monitorContext.IsReplaying) { log.LogInformation($"Next check for {input.Location} at {nextCheckpoint}."); }

            await monitorContext.CreateTimer(nextCheckpoint, CancellationToken.None);
        }
    }

    log.LogInformation("Monitor expiring.");
}
What if that was changed into an infinite monitor?
Won't the history of the context grow to the point where it causes a problem?
Am I correct that whenever one calls await CreateTimer, the current method awaits until the timer is met, but a replay is also executed when the timer triggers?
What about the next time, then: are the initial method plus 2 replays running?
What happens if the platform moves the function to a new host? Does it then cancel the initial method and continue because a replay is played on the new host?
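On the infinite-monitor concern: the documented way to keep history bounded is ContinueAsNew, which restarts the orchestration with a fresh history (the "eternal orchestration" pattern). The sample above is C#, but sketched here with the Node.js durable-functions package to match the rest of this page (the activity name is a placeholder):

const df = require("durable-functions");

module.exports = df.orchestrator(function* (context) {
  const input = context.df.getInput();

  // One monitoring pass (placeholder activity name)
  yield context.df.callActivity("CheckSomething", input);

  // Durable timer until the next check
  const nextCheck = new Date(context.df.currentUtcDateTime.getTime() + 30 * 60 * 1000);
  yield context.df.createTimer(nextCheck);

  // Restart with a clean history instead of looping forever
  context.df.continueAsNew(input);
});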