I have an Amazon EC2 instance running and use twurl to connect to the Twitter /statuses/filter.json streaming API to collect various sporting tweets.
It all works pretty nicely, to be honest, but as a novice I cannot for the life of me figure out how to run the process for only, say, 100 tweets or 5 minutes at a time.
In the Ubuntu terminal, I run the following command:
sudo bash stream.sh
Which calls the bash script containing the following code:
twurl -t -d "track=NHL&language=en" -H stream.twitter.com /1.1/statuses/filter.json > tweets.json
If I manually end the process by pressing CTRL+C, this works perfectly. However, what I would really like is to be able to collect 100 tweets at certain points of the day. Any ideas how I may build this in? I've Googled it but have so far come up short...
Have worked it out!
Ended up being massively simple:
timeout 5m bash stream.sh
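For the 100-tweet case, the stream can be cut off after a fixed number of lines instead of a fixed time. A sketch, assuming twurl emits one JSON-encoded tweet per line; the collect_tweets wrapper name is just for illustration:

```shell
#!/bin/bash
# Cut a line-oriented stream off after N lines (one tweet per line assumed).
# collect_tweets is an illustrative wrapper, not part of twurl.
collect_tweets() {
  local count="$1"; shift
  # head exits after $count lines; the producer then dies on SIGPIPE.
  "$@" | head -n "$count" > tweets.json
}

# Usage with the twurl call from stream.sh (hypothetical):
# collect_tweets 100 twurl -t -d "track=NHL&language=en" -H stream.twitter.com /1.1/statuses/filter.json
```

This relies on head closing the pipe, which terminates the upstream command, so no manual CTRL+C is needed.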
I ran a command for some calculations on 3rd March 2021, somewhere between 15:37:00 (the input file's creation time) and 16:17:00 (the output file's generation time).
Unfortunately, I did not write the command down and cannot remember it now.
Is there any way to recover it from history? history only gives the last 1000 commands, which does not reach back to that period.
If anyone can help me here, it would be much appreciated.
Thank you in advance.
What do you mean by losing the command? Do you mean that you deleted the command history? It seems like you are hitting the limit set in your environment; try increasing it.
echo $HISTSIZE
I mean that I cannot remember the command myself and cannot find a copy of it written anywhere. I have not deleted anything. The machine may have default settings for keeping user history (I do not know exactly).
As a long time (almost 6 months) has passed, I can no longer see it in the command history.
$ echo $HISTSIZE
1000
$ history
results in the last 1000 commands I have used.
Then, I have tried,
$ HISTSIZE=15000
$ echo $HISTSIZE
15000
$ history
still results in ~1000 commands from history.
Is it possible to get the list of commands I have used on 3rd March 2021?
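As an aside on why raising HISTSIZE did not help (assuming bash): HISTSIZE only caps the in-memory list, while the file that survives between sessions is capped by HISTFILESIZE, so raising HISTSIZE after the fact cannot restore lines already trimmed from the history file. A quick way to check what actually survives on disk:

```shell
# Inspect the on-disk history; raising HISTSIZE later cannot restore
# lines already trimmed from the history file (HISTFILESIZE caps it).
histfile="${HISTFILE:-$HOME/.bash_history}"
summary="HISTSIZE=${HISTSIZE:-unset} HISTFILESIZE=${HISTFILESIZE:-unset}"
echo "$summary"
if [ -f "$histfile" ]; then
  saved_lines=$(wc -l < "$histfile")
else
  saved_lines=0
fi
echo "lines saved on disk: $saved_lines"
```

If the saved file is already capped at 1000 lines, a command from six months ago is unfortunately gone unless it survives in some other log.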
Is it possible to define a list of URLs that the ZAP baseline (https://www.zaproxy.org/docs/docker/baseline-scan/) scan should scan? The default behaviour is that it runs for one minute. I only want 20 defined URLs to be scanned.
At the moment I use the Docker container with the following parameters:
docker run -t owasp/zap2docker-stable zap-baseline.py -t https://www.example.com
It will run for up to one minute (by default). If your app has only 20 URLs then it will hopefully find them much faster than that; if it takes 2 seconds to find them, then that's how long it will take to find them. The passive scanning will take a bit longer, but hopefully not too long.
This question is basically a duplicate of this one, except that the accepted answer on that question was, "it's not actually slower, you just weren't running the timing command correctly."
In my case, it actually is slower! :)
I'm on Windows 10. Here's the output from PowerShell's Measure-Command (the TotalMilliseconds line represents wall-clock time):
PS> Measure-Command {npm --version}
Days : 0
Hours : 0
Minutes : 0
Seconds : 1
Milliseconds : 481
Ticks : 14815261
TotalDays : 1.71472928240741E-05
TotalHours : 0.000411535027777778
TotalMinutes : 0.0246921016666667
TotalSeconds : 1.4815261
TotalMilliseconds : 1481.5261
A few other numbers, for comparison:
.\node_modules\.bin\mocha: 1300ms
'npm run test' (just runs mocha): 3300ms
npm help: 1900ms.
The node interpreter itself is fine: node -e 0: 180ms
It's not just npm that's slow... mocha reports that my tests only take 42ms, but as you can see above, it takes 1300ms for mocha to run those 42ms of tests!
I've had the same trouble. Do you have Symantec Endpoint Protection? Try disabling Application and Device Control in Change Settings > Client Management > General > Enable Application and Device Control.
(You could disable SEP altogether; for me the command is: "%ProgramFiles(x86)%\Symantec\Symantec Endpoint Protection\smc.exe" -stop.)
If you have some other anti-virus, there's likely a way to disable it as well. Note that closing the app in the Notification area might not stop the virus protection. The problem is likely with any kind of realtime protection that scans a process as it starts. Since node and git are frequently-invoked short-running processes, this delay is much more noticeable.
In Powershell, I like to measure the performance of git status, both before and after that change: Measure-Command { git status }
I ran into this problem long ago; I think it was caused by an extension I had. I use Visual Studio Code, and with no extensions, running bash:
//GIT Bash Configuration
"terminal.integrated.shell.windows": "C:\\Program Files\\Git\\bin\\bash.exe",
it actually flies. I use both OSes, so I can tell the difference. Try using different tools and disabling some extensions.
And if that still doesn't work, check your antivirus, maybe it's slowing down the process?
Been googling this all day, with no luck. Decided to uninstall Java to see what would happen and bingo, solved my problem. I know this is an old thread, but I found myself coming back to it so many times to see if I missed anything.
off topic:
Got to figure out how to get Java working now 🤦
Didn't know about Measure-Command, so I'll be using that in the future!
I had this problem. When I tried to run one of my work applications at home, I noticed that on my work laptop the app started in about 2 minutes, but on my personal notebook it took 5 minutes or more.
After trying some possible solutions, I finally found the problem: I had installed Git Bash on my D drive partition, which is an HDD. When I reinstalled it on the C drive, which is an SSD, the app started faster. I also moved Node.js to the C drive to prevent other issues.
I've written a Node.JS server which I would like to benchmark. It has the following components that I would like to benchmark separately:
- socket.io: how many continuous connections can it accept and process (where is the saturation point)
- redis: the same as above
- express: don't want to benchmark it
I know there is quite some (not a lot) documentation about that on the internet, but I don't like to reinvent the wheel, plus I don't want to actually spend countless hours of time trying some solution that turns out to be wrong for the job.
This is why I'm asking you guys here: what should I use to get a number/graph (whatever) of the number of simultaneous connections the server can handle without being bogged down? It would also be nice to monitor the CPU, memory and swap of the process (yeah, yeah, I know I can use countless techniques or write my own script, but maybe something like that already exists).
I'm not looking for an answer where you'll paste a link to some solution that I already know exists; I would like an answer from someone with actual experience, who can make a point or two and point me in the right direction.
Thank you
You can use ApacheBench (ab) to test the load your server can take (see its man page).
Some nice tutorials :
nixcraft/howto-performance-benchmarks-a-web-server
petefreitag/Using Apache Bench for Simple Load Testing
Usage :
$ ab -k -n 1000 -c 100 www.yourserver.com
-k - keep alive
-n N - will send N requests to the server
-c X - will make X requests concurrently
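For the CPU/memory monitoring part of the question, a minimal polling sketch (the monitor function name and log path are made up for illustration; assumes a Linux ps):

```shell
# Sample CPU%, MEM% and RSS of a process once per second until it exits.
monitor() {  # usage: monitor <pid> <logfile>
  while kill -0 "$1" 2>/dev/null; do
    ps -o %cpu=,%mem=,rss= -p "$1"
    sleep 1
  done > "$2"
}

# e.g. run in the background while ab hammers the server:
# monitor "$(pgrep -f 'node server.js')" usage.log &
```

Tools like pidstat (from sysstat) do the same job with less effort, but this shows the idea without extra packages.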
How can I get a history of uptimes for my Debian box? After a reboot, I don't see an option for the uptime command to print a history of uptimes. If it matters, I would like to use these uptimes to graph my webserver's uptime lengths between boots on a PHP page.
Update:
Not sure if it is based on a length of time or if last gets reset on reboot but I only get the most recent boot timestamp with the last command. last -x also does not return any further info. Sounds like a script is my best bet.
Update:
Uptimed gives the information I am looking for; I'm just not sure how to extract that info in code. Managing my own script with a database sounds like the best fit for my application.
Install uptimed. It does exactly what you want.
Edit:
You can apparently include it in a PHP page as easily as this:
<?php system("/usr/local/bin/uprecords -a -B"); ?>
The last command will give you the reboot times of the system. You could take the difference between successive reboots, and that should give you the machine's uptime.
update
1800 INFORMATION's answer is a better solution.
You could create a simple script which runs uptime and dumps it to a file.
uptime >> uptime.log
Then set up a cron job for it.
Try this out:
last | grep reboot
According to the last manual page:
The pseudo user reboot logs in each time the system is rebooted.
Thus last reboot will show a log of all reboots since the log file
was created.
So the last column of the #last reboot command output gives you the uptime history:
#last reboot
reboot system boot **************** Sat Sep 21 03:31 - 08:27 (1+04:56)
reboot system boot **************** Wed Aug 7 07:08 - 08:27 (46+01:19)
This isn't stored between boots, but The Uptimes Project is a third-party option to track it, with software for a range of platforms.
Another tool available on Debian is uptimed which tracks uptimes between boots.
I would create a cron job to run at the required resolution (say 10 minutes) by entering the following [on one single line - I've just separated it for formatting purposes] in your crontab (crontab -l to list, crontab -e to edit).
0,10,20,30,40,50 * * * *
/bin/echo $(/bin/date +\%Y-\%m-\%d) $(/usr/bin/uptime)
>>/tmp/uptime.hist 2>&1
This appends the date, time and uptime to the uptime.hist file every ten minutes while the machine is running. You can then examine this file manually to figure out the information or write a script to process it as you see fit.
Whenever the uptime reduces, there's been a reboot since the previous record. When there are large gaps between lines (i.e., more than the expected ten minutes), the machine's been down during that time.
This information is not normally saved. However, you can sign up for an online service that will do this for you. You just install a client that will send your uptime to the server every 5 minutes and the site will present you with a graph of your uptimes:
http://uptimes-project.org/
I don't think this information is saved between reboots.
If you shut down properly, you could run a command on shutdown that saves the uptime; that way you could read it back after booting up.
Or you can use tuptime https://sourceforge.net/projects/tuptime/ for a total uptime time.
You can use tuptime, a simple command that reports the total uptime in Linux, keeping it between reboots.
http://sourceforge.net/projects/tuptime/
Since I haven't found an answer here that would help retroactively, maybe this will help someone.
kern.log (depending on your distribution) should log a timestamp.
It will be something like:
2019-01-28T06:25:25.459477+00:00 someserver kernel: [44114473.614361] somemessage
"44114473.614361" represents seconds since last boot, from that you can calculate the uptime without having to install anything.
Nagios can even produce very nice graphs of this.
Use Syslog
For anyone coming here searching for their past uptime.
The solution from #1800_Information is good advice for the future, but I needed to find my past uptimes for a specific date.
Therefore I used syslog to determine when the system was started that day (the first log entry of the day) and when it was shut down again.
Boot time
To get the system start time grep for the month and day and show only the first lines:
sudo grep "May 28" /var/log/syslog* | head
Shutdown time
To get the system shutdown time grep for the month and day and show only the last few lines:
sudo grep "May 28" /var/log/syslog* | tail