watir, cucumber, select a variable with string without using a case/when - cucumber

I have a cucumber step like this (it does not work, but it shows the concept):
And(/^I navigate to this url '(.*?)'$/) do |web|
  web = $ + 'web' + '_url'
  @browser.goto web
end
In a different file, paths.rb, I have this hardcoded URL:
$google_url = 'http://www.google.com'
I want to be able to select that URL, by doing something like this:
And I navigate to this url 'google'
Right now, I have not found a way of getting the 'real content' of the variable $google_url. Any ideas?

Solution - Using eval (bad idea)
The quickest and most direct solution is to use eval to evaluate the string as the name of the global variable. However, eval is generally considered a bad idea.
And(/^I navigate to this url '(.*?)'$/) do |web|
  url = eval('$' + web + '_url')
  @browser.goto url
end
Solution - Create a module and use send
I think a better idea would be to create a module that contains all your path constants. This is better because (1) you avoid polluting the global namespace with lots of global variables and (2) using send is safer than eval.
In your paths.rb, you could replace your global variables with a module that can return the different URLs:
module Paths
  class << self
    def google_url
      'http://www.google.com'
    end
  end
end
You would then use send in your steps to translate the string to a method:
And(/^I navigate to this url '(.*?)'$/) do |web|
  url = Paths.send(web + '_url')
  @browser.goto url
end

Related

How to scrape image/file from web page in Python?

I am trying to use Python 3.7.4 to back up pictures from a blog site, e.g.
http://s2.sinaimg.cn/mw690/001H6t4Fzy7zgC0WLXb01&690
If I enter the above address in the Firefox address bar, the file is shown correctly.
If I use the following code to download the picture, the server always redirects to a default picture:
from requests import get  # just to try different methods
from urllib.request import urlopen
from urllib.parse import urlsplit, urlunsplit, quote

# hard-coded address is randomly selected for debug purposes.
origPict = 'http://s2.sinaimg.cn/mw690/001H6t4Fzy7zgC0WLXb01&690'
p = urlsplit(origPict)
newP = quote(p.path)
origPict = urlunsplit([p.scheme, p.netloc, newP, p.query, p.fragment])
try:
    #url_file = urlopen(origPict)
    #u = url_file.geturl()
    url_file = get(origPict)
    u = url_file.url
    if u != origPict:
        raise Exception('Failed to get picture ' + origPict)
    ...
Any clue why requests.get or urllib's urlopen doesn't like the '&' in the URL?
Update: Thanks to Artur's comments, I realize the question is not about the request itself but about the site's protection mechanism: JavaScript, cookies, or something else in the page feeds information back to the server that lets it judge whether the request comes from a scraper. So now the question becomes how to scrape an image from a web page, which is more complex than simply downloading an image from a URL.
It's not about the & symbol, but about redirection. Try adding the parameter allow_redirects=False to get; it should be okay.
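For illustration, here is a minimal sketch of that suggestion. The Referer header and the output filename are assumptions on my part (many image hosts serve a placeholder when a request does not appear to come from one of their own pages):

import requests

orig_pict = 'http://s2.sinaimg.cn/mw690/001H6t4Fzy7zgC0WLXb01&690'

# allow_redirects=False makes requests return the 30x response itself
# instead of silently following it to the default picture.
# The Referer header is an assumption: some image hosts only serve the
# real file when the request appears to come from one of their pages.
resp = requests.get(
    orig_pict,
    allow_redirects=False,
    headers={'Referer': 'http://blog.sina.com.cn/'},
)

if resp.status_code == 200:
    with open('picture.jpg', 'wb') as f:
        f.write(resp.content)
else:
    print('Redirected:', resp.status_code, resp.headers.get('Location'))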

Why URL module ignores characters after # in node.js

I am using the url module, which basically splits a web address into its readable parts.
var url = require('url');
var data = url.parse(request.url).pathname;
The value of request.url is C:\AppFolder\dropbox\videos\myVideo8#.MP4. After it gets parsed, I don't understand why it is not returning the value with "#.MP4".
Regarding "I dont understand why its not returing the value with "#.MP4"": that is because #.MP4 is the fragment, not the path component, of the URL. (You can read up on URL syntax, e.g. on Wikipedia, if you are not sure: https://en.wikipedia.org/wiki/URL#Syntax)
You want to look at hash, not pathname: https://nodejs.org/docs/latest/api/url.html#url_url_hash

Python Requests: Use * as wildcard part of URL

Let's say I want to get a zip file (and then extract it) from a specific URL.
I want to be able to use a wildcard character in the URL like this:
https://www.urlgoeshere.domain/*-one.zip
instead of this:
https://www.urlgoeshere.domain/two-one.zip
Here's an example of the code I'm using (URL is contrived):
import requests, zipfile, io
year='2016'
monthnum='01'
month='Jan'
zip_file_url='https://www.urlgoeshere.domain/two-one.zip'
r = requests.get(zip_file_url, stream=True)
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall()
Thanks in advance!
HTTP does not work that way. You must use the exact URL in order to request a page from the server.
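If the varying part of the filename follows a known pattern, you can build the exact URL from the pieces you already have. A minimal sketch, assuming (hypothetically) that the files are named <month>-one.zip:

import io
import zipfile

import requests

year = '2016'
monthnum = '01'
month = 'Jan'

# Hypothetical naming pattern: substitute the known pieces into the exact
# filename the server expects; HTTP has no wildcard matching.
zip_file_url = 'https://www.urlgoeshere.domain/{}-one.zip'.format(month.lower())

r = requests.get(zip_file_url, stream=True)
r.raise_for_status()
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall()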
I'm not sure if this helps you, but Flask has a feature that works similarly to what you require. Here's a working example:
@app.route('/categories/<int:category_id>')
def categoryDisplay(category_id):
    '''Display a category's items'''
    # Get the category and its items via queries on the session
    category = session.query(Category).filter_by(id=category_id).one()
    items = session.query(Item).filter_by(category_id=category.id)
    # Display the items using Flask HTML templates
    return render_template('category.html', category=category, items=items,
                           editItem=editItem, deleteItem=deleteItem,
                           logged_in=check_logged_in())
The route decorator tells the web server to call that method when a URL like /categories/1 (or /categories/2, /categories/232, and so on) is accessed. I'm not sure, but I think you could do the same with the name of your zip as a string. See project.py for more details.

How to use hook in WHMCS only on specific page?

If I want to display the side menu only on the affiliates page (affiliates.php), then I enclose the code in:
if (App::getCurrentFilename() == 'affiliates'){
}
But how do I do the same for the page index.php?m=somepage?
You can use the PHP built-in variables to get the URI; however, if you want to use the application object to fetch a specific argument, you could do something like:
$whmcs = App::self();
$module = $whmcs->get_req_var('m');

Determining if a link exists w/ Cucumber/Capybara

I want to verify that a link with a specific href exists on a page. I am currently doing I should see "/some-link-here" but that seems to fail. How can I make sure that link exists without having to do click + I should be on "/some-link-here" page?
You will need to add a custom step:
Then /^"([^\"]*)" should link to "([^\"]*)"(?: within "([^\"]*)")$/ do |link_text,
page_name, container|
with_scope(container) do
URI.parse(page.find_link(link_text)['href']).path.should == path_to(page_name)
end
end
You can use the step like Then "User Login" should link to "the user_login page", where user_login is the name of your route.
I used jatin's answer, but have a separate scoping step:
When /^(.*) within ([^:]+)$/ do |step, parent|
  with_scope(parent) { When step }
end

Then /^"([^\"]*)" should link to "([^\"]*)"$/ do |link_text, page_name|
  URI.parse(page.find_link(link_text)['href']).path.should == path_to(page_name)
end
Then I have this in my test:
step '"my foods" should link to "food_histories" within ".tabs"'
And this in my paths:
# note: lots not shown
def path_to(page_name)
  case page_name
  when /^food_histories$/
    food_histories_path
  end
end
This is what I have done myself. It is quite simple, but it does mean you are hardcoding your URL, which to be honest is not ideal, as it makes your test very brittle, especially if you are using third-party URLs!
But if you are using a URL that you manage and are happy to maintain this test, then go for it.
Then /^the link is "(.*?)"$/ do |arg1|
  page.should have_xpath("//a[@href='" + arg1 + "'][@target='_blank']")
end
I first landed here when looking for a solution and thought I would give an updated answer. It depends on your Capybara syntax, but using the matcher has_link? you could write, for href "/some-link-here" and link text "Click Me":
expect(page).to have_link("Click Me", href: '/some-link-here')
