ASCII Text Browser Vs. cURL - linux

I hit this URL: http://artii.herokuapp.com/make?text=abc+art&font=smisome1
In the browser I see the expected ASCII art. But when I curl the exact same URL:
curl http://artii.herokuapp.com/make?text=abc+art&font=smisome1
I get a different result. 🤦🏻‍♂️
Does anyone know why this is happening?
Is there a specific flag I should pass to curl to make it return the same result as the browser?
How do I get my terminal to display the same ASCII text format as the browser?

The ampersand in the URL is breaking up the command.
If you escape the ampersand you should get the desired result.
E.g.:
curl http://artii.herokuapp.com/make?text=abc+art\&font=smisome1
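Alternatively (not part of the answer above, just standard shell quoting), you can wrap the whole URL in single quotes so the shell never sees the &:
# Quoting the URL stops the shell from splitting the command at the &,
# so the font parameter actually reaches the server.
curl 'http://artii.herokuapp.com/make?text=abc+art&font=smisome1'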

In the browser you are using &font=smisome1 (look at the end of your URL); in the terminal that font is not used, because the shell swallows everything after the &.
Maybe this helps you. Sorry for the unprofessional answer.

Related

Url command with UTF-8 URL

I was trying to download a URL from the following address:
http://data.riksdagen.se/personlista/?utformat=json&valkrets=Värmlands+Län
(Open Data from the Swedish government)
This works perfectly in the browser, but using the url command in LiveCode doesn't, as the Swedish character ä doesn't get encoded properly. I've tried to urlEncode the string but it still doesn't work. Is there any way to download a URL with UTF-8 encoded characters?
If I call curl via shell I do get the correct values, but that isn't available on the mobile...
After some thinking and digging I realised that the answer is of course to translate the URL from the UTF-16 that LiveCode uses internally into the UTF-8 that the server expects. Browsers use UTF-8 by default, which is why it works there. So
put url "http://data.riksdagen.se/personlista/?utformat=json&valkrets=" & textEncode("Värmlands+Län", "utf8")
did the trick!
The problem is that I can't use the urlEncode function, as that translates all the Swedish characters and the server expects them as UTF-8 (which is of course strange in itself!).
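For comparison, here is roughly what the browser ends up sending; a minimal sketch from the shell, with the UTF-8 bytes of ä percent-encoded by hand:
# Sketch: ä encoded as UTF-8 is the byte pair C3 A4, so it appears as %C3%A4 in the URL.
curl 'http://data.riksdagen.se/personlista/?utformat=json&valkrets=V%C3%A4rmlands+L%C3%A4n'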

How to handle special characters in a wget download link?

I have a link like this:
wget --user=user_nm --http-password=pass123 https://site.domain.com/Folder/Folder/page.php?link=/Folder/Folder/Csv.Stock.php\&namefile=STOCK.Stock.csv
But while the password authorization is fine, wget still cannot process the link. Why?
The safest way when handling a link from e.g. a browser is to use single quotes (') to quote the whole link string. That way the shell will not try to break it up, without you having to manually escape each special character:
wget --user=user_nm --http-password=pass123 'https://site.domain.com/Folder/Folder/page.php?link=/Folder/Folder/Csv.Stock.php&namefile=STOCK.Stock.csv'
Or, for a real example:
wget --user-agent=firefox 'https://www.google.com/search?q=bash+shell+singl+quote&ie=utf-8&oe=utf-8&aq=t&rls=org.mageia:en-US:official&client=firefox-a#q=bash+single+quote&rls=org.mageia:en-US:official'
Keep in mind that server-side restrictions might make using wget like this quite hard. Google, for example, forbids certain user agent strings, hence the --user-agent option above. Other servers use cookies to maintain session information and simply feeding a link to wget will not work. YMMV.
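If session cookies are the obstacle, one approach (a sketch; cookies.txt stands in for a cookie file saved during an earlier login) is to hand wget the cookie jar explicitly:
# Sketch: reuse cookies saved from an earlier authenticated session.
wget --load-cookies cookies.txt 'https://site.domain.com/Folder/Folder/page.php?link=/Folder/Folder/Csv.Stock.php&namefile=STOCK.Stock.csv'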

Trying to extract field from browser page

I'm trying to extract one field from an online form in Firefox to my local Ubuntu 12.04 PC and Mac OS X 10.7.4.
I can manually save the page locally as a text document and then search for the text using a Unix script, but this seems rather cumbersome and I need it to be automated. Is there a more efficient method?
My background is on Macs, but the company is trialling Linux PCs, so please be tolerant of my Ubuntu ignorance.
If you mean to program something, try:
the WWW::Mechanize library, which has Python and Perl bindings,
one of the mouse-scripting engines for Linux (e.g. Actionaz),
a test-automation tool that works with Firefox (Selenium).
You can do it with a simple BASH script.
Take a look at some useful tools like:
wget
sed
grep
and then nothing will be cumbersome and everything can be automated.
If you want to go with the method that you mentioned, you can use curl to automate the saving of the form. Your BASH script would then look something like this:
curl http://locationofonlineform.com -o tempfile
valueOfField=$(grep patternToFindField tempfile)
# Do stuff
echo $valueOfField
If you want to get rid of the temporary file, you can directly feed the result of curl into the grep command.
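For example, a minimal sketch of the same script without the temporary file (the URL and pattern are still the placeholders from above):
# Sketch: feed curl's output straight into grep, no temp file needed.
valueOfField=$(curl -s http://locationofonlineform.com | grep patternToFindField)
echo "$valueOfField"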

Using ' in Shellscript (wget)

I'm trying to get wget to work with a POST request and a special password. It contains ' and looks like this:
wget --save-cookie cookie.txt --post-data "user=Abraham&password=--my'precious!" http://localhost/login.php
But when I use the tick with wget I get strange errors. Does anybody know how to get it to work?
The tick (single quote) in your request is a straightforward issue, although you may have a second one lurking in there.
The word you are looking for is 'escape': the quote character can have special meaning on the command line, and you need to escape it so that it is not interpreted as such. In the bash shell (the typical Linux console) the escape character is \; put it in front of the quote and it will no longer be interpreted.
The second potential issue is the way you are using wget: are you certain that is the request you are meant to send? Are you trying to authenticate with the server using a web form, or with Basic, Digest or some other form of HTTP authentication?
If this is the manner in which you should be authenticating, then you will also need to percent-encode the --post-data, as wget will not do this for you.
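A minimal sketch of what that might look like, assuming an ordinary application/x-www-form-urlencoded login form (the single quote becomes %27 and the exclamation mark %21):
# Sketch: the password's special characters are percent-encoded by hand,
# so neither the shell nor the HTTP layer trips over them.
wget --save-cookies cookie.txt \
     --post-data "user=Abraham&password=--my%27precious%21" \
     http://localhost/login.php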

Download a file with machine-readable progress output

I need a (Linux) program that can download from an HTTP (or optionally FTP) source and output its progress to the terminal in a machine-readable form.
What I mean by this is that I would like it NOT to use a progress bar, but to output progress as a percentage (or other number), one line at a time.
As far as I know, neither wget nor curl supports this.
Use wget. The percentage is already there.
P.S. Also, this isn't strictly programming-related.
Try using curl with Pipe Viewer (pv): http://www.ivarch.com/programs/quickref/pv.shtml
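A minimal sketch, assuming the total size is known in advance (e.g. from a prior HEAD request); pv's -n flag prints a bare percentage per line to stderr instead of drawing a bar:
# Sketch: example.com and the 1234567-byte size are placeholders.
curl -s http://example.com/file.iso | pv -n -s 1234567 > file.iso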
Presumably you want another script or application to read the progress and do something with it, yes? If this is the case, then I'd suggest using libcurl in that application/script to do the downloading. You'll be able to easily process the progress and do whatever you want with it. This is far easier than trying to parse output from wget or curl.
The progress bar from curl and wget can be parsed: just ignore the bar itself and extract the % done, time left, data downloaded, and whatever other metrics you want. The bar is overwritten using special control characters, so when the output is parsed by another application you will see many \r's and \b's.
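A sketch of that parsing idea (the URL is a placeholder); converting the carriage returns that redraw the bar into newlines makes the percentages easy to pick out:
# Sketch: keep only the percentage figures from wget's progress output.
wget http://example.com/file.iso 2>&1 | tr '\r' '\n' | grep -o '[0-9]\+%'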
