I'm trying to extract one field from an online form displayed in Firefox to my local Ubuntu 12.04 PC and Mac OS 19.7.4.
I can manually save the page locally as a text document and then search for the text with a Unix script, but this seems rather cumbersome and I need it to be automated. Is there another, more efficient method?
My background is on Macs, but the company is trialling Linux PCs, so please be tolerant of my Ubuntu ignorance.
If you want to program something, try:
the WWW::Mechanize library, which has Python and Perl bindings
one of the mouse-scripting engines for Linux (e.g. Actionaz)
a test automation tool that works with Firefox (Selenium)
You can do it with a simple Bash script.
Take a look at some useful stuff like:
wget
sed
grep
and then nothing will be cumbersome and everything can run automatically.
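For example, a minimal sketch using those tools (the URL, field name, and HTML patterns are made-up placeholders; adjust them to match the actual page):
wget -qO- "http://example.com/form.html" \
  | grep -o 'name="myfield"[^>]*value="[^"]*"' \
  | sed 's/.*value="\([^"]*\)".*/\1/'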
If you want to go with the method that you mentioned, you can use curl to automate saving the form. Your Bash script would then look something like this:
curl http://locationofonlineform.com -o tempfile
valueOfField=$(grep patternToFindField tempfile)
# Do stuff with the extracted value
echo "$valueOfField"
If you want to get rid of the temporary file, you can directly feed the result of curl into the grep command.
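For example, using the same placeholder URL and pattern as above:
valueOfField=$(curl -s http://locationofonlineform.com | grep patternToFindField)
echo "$valueOfField"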
I am a Linux newbie and I often find myself working with a bunch of random data.
For example: I would like to work on a sample text file to try out some regular expressions or read some data into gnuplot from some sample data in a csv file or something.
I normally do this by copying and pasting passages from the internet, but I was wondering if there is some combination of commands that would let me do this without leaving the terminal. I was thinking about using something like the curl command, but I don't know exactly how it works...
To my knowledge there are websites that host content. I would simply like to access them and store them in my computer.
In conclusion, and as a concrete example: how would I copy a random passage off a website and store it in a file on my system using only the command line? Maybe you can point me in the right direction. Thanks.
You could redirect the output of a curl command into a file, e.g.:
curl https://run.mocky.io/v3/5f03b1ef-783f-439d-b8c5-bc5ad906cb14 > data-output
Note that I've mocked data in Mocky which is a nice website for quickly mocking an API.
I normally use "Project Gutenberg" which has 60,000+ books freely downloadable online.
So, if I want the full text of "Peter Pan and Wendy" by J.M. Barrie, I'd do:
curl "http://www.gutenberg.org/files/16/16-0.txt" > PeterPan.txt
If you look at the page for that book, you can see how to get it as HTML, plain text, ePUB or UTF-8.
This is just for learning purposes (don't consider inotify).
What if we want to develop a Bash shell script that compares the file list of the previous run with that of the current run whenever we run the script manually, and emails the name, size, and time of the new files only?
The best that I can suggest is to find the tools that you need to do your specific work,
e.g. ls -l combined with awk, plus mail or any other mailing tool, etc.
The idea is to use standard tools to accomplish your mission.
Don't compile your own code, just use standard tools in your script. Most of the things that you need are already there.
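As a rough sketch of that approach (the watched directory, list-file paths, and e-mail address are placeholders, and it assumes a mail command such as mailx that accepts -s for the subject):
#!/bin/bash
# Directory to watch and files holding the previous/current listings (placeholders)
WATCH_DIR="/path/to/watch"
PREV_LIST="/tmp/prev_list.txt"
CURR_LIST="/tmp/curr_list.txt"
# Record name, size, date and time of every file in the directory
ls -l --time-style=long-iso "$WATCH_DIR" | awk 'NR>1 {print $8, $5, $6, $7}' > "$CURR_LIST"
# Mail only the lines that were not present in the previous run
if [ -f "$PREV_LIST" ]; then
    new_files=$(comm -13 <(sort "$PREV_LIST") <(sort "$CURR_LIST"))
    [ -n "$new_files" ] && echo "$new_files" | mail -s "New files in $WATCH_DIR" user@example.com
fi
mv "$CURR_LIST" "$PREV_LIST"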
The task I am trying to achieve is to write a script which accesses a Red Hat server, navigates to a certain directory, and adds things to a text file. How would I go about this task? What scripting language should I use?
I don't have any experience in scripting languages, I'm only really an expert in Java applications and occasional C#.
Hope somebody can help. This would be extremely useful to me.
If you're just trying to append a line, you can use SSH (for the connection) and just concatenate to the end of the file like so:
echo "New line to text file" | ssh myserver.com 'cat >> /var/myfile.txt'
If you're trying to change the contents, then you'll need to download the file before running it through a utility such as sed or awk and then uploading it back to the server. scp can be used to securely download and upload the file, but describing sed or awk here is beyond the scope of a brief answer.
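A rough sketch of that round trip (the hostname, file path, and sed expression are placeholders):
scp myserver.com:/var/myfile.txt /tmp/myfile.txt
sed -i 's/old text/new text/g' /tmp/myfile.txt
scp /tmp/myfile.txt myserver.com:/var/myfile.txt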
I am writing a script on Red Hat Linux (I forget the version) that needs a header, but the banner command is not there for me to use and I won't be able to get it installed. I read via Google that it may well have been deprecated.
So is there a new version of the command that produces similar results, or a way I can replicate the command, or even just temporarily change the script output so that characters are a different size?
I've tried looking at stty, but we don't access the box via xterm; we log in directly via PuTTY.
In its simplest form, 'banner' is less than a few pages of code (e.g. this one). Perhaps you could just compile and run it from your home directory?
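For example, assuming you have fetched a single-file banner.c implementation and a compiler is available (file names and the install path are placeholders):
mkdir -p ~/bin
gcc -o ~/bin/banner banner.c
~/bin/banner "MY HEADER"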
Use some web site, for example http://patorjk.com/software/taag/.
If you need it frequently you can create a script to scrape the result.
BTW, stty has nothing to do with your problem, I don't know why you mentioned it.
I want to create an image of what a web page looks like,
e.g. create a small thumbnail of the HTML + images.
It does not have to be perfect (e.g. Flash/JavaScript rendering).
I will use the code on Linux; ideally it would be some Java library, but a command-line tool would be cool as well.
Any ideas?
Try CutyCapt, a command-line utility. It uses Webkit for rendering and outputs in various formats (SVG, PNG, etc.).
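For example (the URL and output name are placeholders; depending on how it was built, the binary may be named CutyCapt or cutycapt):
CutyCapt --url=http://www.example.org --out=example.png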
You can get it nearly perfect, and cross-platform too, by using a browser plugin.
FireShot or ScreenGrab for Firefox.
6 Google Chrome Screenshot Webpage Capture Extensions
BrowserShots is an open source project that may have some code you can use.
also see:
Command line program to create website screenshots (on Linux)
Convert web page to image
How to take screenshot of whole web page, rather than what shows on the screen
What is the best way to create a web page thumbnail?
Convert HTML to an image
To take a screenshot in the terminal with ImageMagick, type the following line into a terminal and then click-and-drag the mouse over a section of the screen:
import MyScreenshot.png
To capture the entire screen after some delay and resize it, use the following command:
import -window root -resize 400x300 -delay 200 screenshot.png
You may use a mixture of xwininfo and import to retrieve the window id of the browser and make a screenshot of that window. A bash script to automate this process would be something like this:
#!/bin/bash
# Grab the window id of the Firefox window, then capture and resize it
window_id=$(xwininfo -tree -root | grep Mozilla | awk '{print $1}')
import -window "$window_id" -resize 100x100 tumb.png
This script will create a 100x100 screenshot of Firefox in the current directory under the name tumb.png.
Several sources show how to run a bash script from inside a Java application, google can help you on that. If you are in a hurry, check this and this.
After reading this page, I thought, let me fire up the Midori browser (http://midori-browser.org/), and when I tried the -h option, I saw:
-s, --snapshot Take a snapshot of the specified URI
CutyCapt is difficult to compile and has many dependencies; Midori has fewer. It outputs the PNG of the website into the TMP folder. One can get the file with:
midori -s http://www.rcdwealth.com new.png 2>/dev/null | awk '{ print $4}'
After that, the file can be converted to thumbnail size by using ImageMagick's convert program.
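For example:
convert new.png -resize 100x100 thumb.png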
If you're interested in Java, maybe you could look at browser automation using Selenium RC: http://seleniumhq.com
It's a little Java server that you can install on the box, and the program itself will execute remote commands in a web browser.
The steps look like this (this is pseudocode, by the way; I write my Selenium in PHP and I can't recall 100% of the specifics off the top of my head):
selenium.location("http://foo.com")
selenium.open("/folder/sub/bar.html")
selenium.captureScreenshot("/tmp/" + this.getClass().getName() + "."
+ testMethodName + ".png");
Actually, I just did a quick websearch for the exact syntax on that last one ... and this guy has a blog with what might actually be working code in java :)
https://dev.youdevise.com/YDBlog/index.php?title=capture_screenshots_of_selenium_browser_&more=1&c=1&tb=1&pb=1
There are also a number of websites that provide this service "cross-browser and OS"; I just can't recall what they are. Basically they've got a cloud of every single operating system and browser combination, and they log on with each machine, take a screenshot and store it on their site for you to come back to in a few hours when they're done.
Ahh... another websearch and it's yours :) http://browsershots.org/