Anchor Text issue in ebay listing template [closed] - ebay-design-templates

This may be a simple question, but I did quite a bit of research on Google, Stack Overflow and eBay over the last two days before posting here.
Scenario
My company sells items via eBay. I am working on a new listing template (an HTML page that describes the item being sold).
Please see http://www.ebay.com.au/itm/4x-SAMSUNG-CLP-680DW-680DN-CLP680-CLT-506S-CLT506-506-CLTK-506-TONER-REFILLS-/380697552903
In the listing, the blue-coloured portion is the template I created.
Problem
I have five anchor links (PAYMENTS, POSTAGE & HANDLING, OUR LISTINGS, WARRANTY & RETURNS, CONTACT US) at the top of the template, which take customers straight to the corresponding sections of the listing.
The listing template itself is an HTML file, which we modify according to the product spec and upload together with the eBay listing. Everything works fine on my local PC.
Once uploaded to eBay, however, eBay rewrites the links with a strange URL and the navigation doesn't work at all. On eBay, when I hover over a link, it shows the following URL.
Example for PAYMENT:
http://vi.raptor.ebaydesc.com.au/ws/eBayISAPI.dll?ViewItemDescV4&item=380697552903&t=1376555146000&tid=-1&category=16204&seller=tonerstop&excSoj=1&rptdesc=1&excTrk=1&lsite=15#pay
I can see the #pay fragment is appended correctly, but somehow it's not functioning as expected.
Interesting Finding
Adding more complexity, the issue apparently isn't consistent either. Sometimes it works; in some situations it forms a completely different link and then works. See the example below.
Example for PAYMENT:
http://www.ebay.com.au/itm/4x-SAMSUNG-CLP-680DW-680DN-CLP680-CLT-506S-CLT506-506-CLTK-506-TONER-REFILLS-/380697552903?#pay
But once you refresh the browser, it stops working again.
Tried Solutions
1. Renamed the anchors, assuming there might be another anchor with the same name. Didn't work.
2. Tried JavaScript, but it isn't supported in eBay listings.
3. Looking for similar templates on eBay to see whether theirs work. (Still looking.)
Questions
Why is it happening intermittently?
Is there any special eBay requirement for anchors?
Am I (of course) missing anything here?
More Info
The issue still exists. I checked 20+ different vendors' listings.
I had a long chat/email exchange with eBay, but couldn't get past the customer support team to a developer.
So I have no choice other than to remove all the anchor menus.

I had the same problem and solved it by using another method: keep the href like a normal anchor, but scroll to the target from an onclick handler and cancel the default navigation, e.g.
<a href="#XXXXX" onclick="document.getElementById('XXXXX').scrollIntoView(true); return false;">...</a>
Bye!
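For what it's worth, here is a rough sketch of how that workaround might look inside a listing template. The pay id and the PAYMENTS label are placeholders for illustration, not the asker's actual markup:

<!-- menu link: the fragment stays in the href, but the onclick does the scrolling
     and "return false" cancels the default navigation that eBay rewrites -->
<a href="#pay" onclick="document.getElementById('pay').scrollIntoView(true); return false;">PAYMENTS</a>

<!-- target section further down the description -->
<div id="pay">Payment details go here...</div>

Because the scroll is done by the handler rather than by the browser following the URL fragment, it keeps working regardless of how eBay rewrites the surrounding page URL.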

Related

Amazon KDP creator doesn't recognise table of contents from Google document [closed]

I'm trying to follow the advice here on how to format your TOC using a bookmark:
https://www.kdpcommunity.com/s/question/0D5f400000FHVFhCAP/google-docs-and-toc?language=en_US
But none of it works; my KDP page still shows the table of contents as unrecognised.
I do not believe the advice in that thread is accurate. If it is, I have been unable to replicate it. I have tried the following, none of which have worked:
Added headings to the document and then auto-generated the ToC.
Additionally bookmarked those headings.
Changed the auto-generated ToC so it links to the bookmarks rather than the headings.
Skipped the auto-generated ToC and instead manually built one with links to bookmarks.
All of these will create a working table of contents which is not then recognised by KDP. I have not found any way of getting a table of contents generated in Google Docs (by any method) to show up when uploaded to KDP. I assume this is because of the way Google Docs generates the Word doc that KDP requires. The only workable solution appears to be to edit your document in a third-party application (Word or some other package) once you've exported it from Google Docs.
I suspect there's a glitch in Amazon's programming. I have been uploading my books for years in Word with a manually created TOC that meets Smashwords' specifications (they're very picky). KDP always recognized the TOC until about a year ago. Suddenly even books that I've updated in a minor way and re-uploaded are marked as lacking a TOC. Yet when I open their previewer, the TOC is there, and I can click the links and they work fine. I appreciate that others have tried these multiple time-wasting attempts to fix the problem and that they don't work any better than my time-tested (but no longer recognized) method.

How do I go about changing the contents of a page based on the URL? Express.js, Node.js

Okay, I know this question was a bit confusing, so let me decompose it a bit further. For example, let's say I have the URL https://example.com. I have an open GET endpoint at https://example.com/user/* that will return a specific user's information based on the contents of the "*". Let's say a specific user is at https://example.com/user/12345. On an HTML page, I would like to put that user's profile contents and some of their hobbies. Again, this is theoretical. I have explored various solutions such as Handlebars.js, which can dynamically change values based on the server request. However, this solution does not always work. Take a search engine, for example, at https://mysearchengine.com/search?query=dogs. Here we have a search query for dogs. How do I render all of the results to an HTML document without using a dynamic content module like Handlebars?
This question was particularly difficult to ask, so please do not mark this as "not enough information". I would be more than happy to clarify any questions you may have about the nature of my query. Thank you so much in advance,
Flight Dude.
Just wanted to let y'all know I found my answer: EJS. Thanks!
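For anyone who lands here later, below is a minimal sketch of what the EJS approach might look like with Express. The getUserById and searchFor helpers, the route paths and the template names are illustrative assumptions, not code from the original post:

// app.js - assumes express and ejs are installed (npm install express ejs)
const express = require('express');
const app = express();
app.set('view engine', 'ejs'); // Express looks for templates in ./views

// hypothetical data helpers standing in for a real database
const getUserById = (id) => ({ id, name: 'Example User', hobbies: ['flying', 'chess'] });
const searchFor = (query) => [{ title: `Result about ${query}` }];

// /user/12345 renders views/user.ejs with that user's data
app.get('/user/:id', (req, res) => {
  res.render('user', { user: getUserById(req.params.id) });
});

// /search?query=dogs renders views/results.ejs with the matching results
app.get('/search', (req, res) => {
  res.render('results', { query: req.query.query, results: searchFor(req.query.query) });
});

app.listen(3000);

A matching views/user.ejs template could then interpolate the values the route passed in:

<h1><%= user.name %></h1>
<ul><% user.hobbies.forEach(function (h) { %><li><%= h %></li><% }); %></ul>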

View. Show values as Links. Strange behaviour

An XPage (listPostits.xsp) has a "View" container control, where one of the columns is set to "show values in this column as links".
Now, here comes "Strange behaviour".
When I work with this application on my own (developer) PC (Win XP, Chrome or IE), Domino generates a link which can't really be processed:
/servername/db/postit/postit.nsf/listPostits.xsp/onePostit.xsp?documentId=many_numbers&action=editDocument
Namely, the listPostits.xsp portion shouldn't be there! That portion is the name of the XPage the View control sits in.
When I work with the application from another PC (Mac, Firefox), I get the correct link (the same as above, but without the XPage name in between):
/servername/db/postit/postit.nsf/onePostit.xsp?documentId=many_numbers&action=editDocument
Update: let us leave aside for the moment the differences in the links generated on the two machines. The first question is: why is the extra portion inserted into the automatically generated link?
After playing around I think I might have found the reason for this strange behaviour, namely the "Substitution" rules on the server side. One of them substitutes "*/postit/all" with "/db/postit/postit.nsf/listPostits.xsp".
If I switch it off, the links are generated properly. Still, it's pretty strange to me that these settings influence the way Domino generates links. I thought they were applied on the fly and had nothing to do with how links are generated inside the application.
So help is now needed on the Web Site Rules topic, but for that, I guess, I have to create another question. In case somebody has some good info on this, please share it with me. I'm a bit confused at the moment :)
Final update: I spent some more hours testing and the results confirmed the initial idea.
If I open the page with the standard URL, i.e. http://servername/db/postit/postit.nsf/listPostits.xsp, then everything is fine and links are generated properly. However, when I open the same page with the short URL http://servername/postit/all, the server adds the substituted URL (db/postit/postit.nsf/listPostits.xsp) to every single link it generates automatically for opening/editing the underlying document.
Is it a bug or a feature? I don't know.
As a workaround (because I want to keep simple URLs for the application) I have to generate the links manually.

Problems testing SharePoint with Selenium (timeouts, repeating auth and missed links)

I have some serious problems testing a SharePoint site with Selenium/Bromine. As I didn't find an answer via various searches, I hope someone here can point me in the right direction.
I am constantly getting timeouts opening the main page, but the server is definitely fast enough to answer the request and sits at 90% idle. Nevertheless I just get logs like these:
open http://username:passwd#10.13.110.54/default.aspx | Timed out after 90000ms
Test terminated The selenium server did not return OK
The auth popup appears at irregular intervals (every 5 to 10 clicks), although every open command uses http://username:passwd#10.13.110.54/ as a prefix.
Clicking on elements is sometimes not registered; the logs show a successful
isElementPresent link=myLink
click link=myLink
but the browser doesn't react. These are mainly in-page links which open a new folder or an editing box.
I'm not sure whether I should have posted this as three separate questions, but I didn't want to spam.
I hope someone can help me, as I have had these problems for nearly 3 weeks now.
Thanks in advance
Thomas
For your question number 2: Okay, this is a really late reply. I stumbled on this page looking for the answer myself. Given that I have solved it in the meantime, I figured I'd post my answer for other people stumbling onto this page.
General solution:
You need to create or use a profile that will let Firefox automatically forward your credentials to the SharePoint website. You can create the profile manually and call it each time; see https://applicationtestingtips.wordpress.com/2009/12/21/seleniumrc-handle-windows-authentication-firefox/ for instructions.
Programmer solution: (works in Python, should work similarly in Java)
Or you can create a new profile on the fly each time. I did that based on the information on the previously mentioned website. I use Python for calling Selenium, but this should be rather similar in whatever language you use:
from selenium import webdriver

# all your SharePoint hosts in a comma-separated list
sharepointHosts = 'sharepoint1.mycompany.com,sharepoint2.mycompany.com'
ffProfile = webdriver.FirefoxProfile()  # fresh profile that forwards NTLM credentials
ffProfile.set_preference('network.automatic-ntlm-auth.trusted-uris', sharepointHosts)
ffProfile.set_preference('network.negotiate-auth.delegation-uris', sharepointHosts)
ffProfile.set_preference('network.negotiate-auth.trusted-uris', sharepointHosts)
driver = webdriver.Firefox(firefox_profile=ffProfile)

How to get a description of a URL

I have a list of URLs and am trying to collect their "descriptions." By description I mean what comes up, for example, if you Google the link. For example, Googling http://stackoverflow.com shows the description as:
A language-independent collaboratively edited question and answer site for programmers. Questions and answers displayed by user votes and tags.
This is the data I'm trying to accumulate for the URLs I have.
I tried parsing the URLs' meta descriptions; however, most of them lack one (yet Google and other search engines manage to get a description somehow).
Any ideas? Should I just "google" each link and scrape the data? I have a feeling Google wouldn't like this...
Thanks guys.
Different search engines have different algorithms to get the description out of the page if/when it lacks the description meta tag. Some ignore the tag even if it's there.
If you want the description Google has, the most accurate way to get it would be to scrape it. Otherwise, you could write your own or look around on the web for code that does it.
These are called snippets.
Google use proprietary (and possibly patented) methods to garner this information, so there is no simple answer.
As you suggest, they will use meta-description information if it is there. (How to set the meta-information to help Google.)
They will also honour requests from the page authors to NOT include snippets. (How to prevent Google from displaying snippets) You should probably respect this too (as well as robots.txt, of course.)
You may have some luck with existing auto-summary packages, such as OTS.
You may want to check AboutUs.org (i.e. http://www.aboutus.org/StackOverflow.com).
But, there's little chance that the site will have an aboutus page and not have a meta description.
Some info that might explain how Google does this:
Webmasters/Site owners Help
Adding a URL to Google
I am not familiar with Google APIs, but perhaps there is an official way to get such information.
Interesting. Some sources are better than others.
For "audiotuts.com", Google has a worse description than AboutUs.com.
Google:
Nov 18th in General by Joel Falconer · 1. Recently, an AUDIOTUTS reader asked me about creative process. While this is a topic that can’t be made into a ...
AboutUs.com:
AUDIOTUTS is a blog/tutorial site for musicians, producers and audio junkies! It is the sister site of the popular PSDTUTS, VECTORTUTS and NETTUTS.
I hate problems like these... they should be trivial but they aren't!
If you can assume English content, you can first look for Meta Description, and if that doesn't work, you can look for the first two or three sentence-like word sequences.
A product I worked on looked for the first P or DIV that contained more than one sequence of > n "words" delimited by periods. It would use the two or three sentence-like sequences, up to x total words, as a summary paragraph. It wasn't 100% accurate, but good enough for the average case. The number of words was adjusted a few times to eliminate things like navigation elements.
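As an illustration only, here is a rough sketch of that kind of heuristic in Node.js using cheerio; the thresholds (minWords, maxWords) and the helper name are assumptions for the example, not details from the product described above:

// describe.js - assumes cheerio is installed (npm install cheerio)
const cheerio = require('cheerio');

function describe(html, minWords = 5, maxWords = 50) {
  const $ = cheerio.load(html);

  // 1. Prefer the meta description when the page has one
  const meta = $('meta[name="description"]').attr('content');
  if (meta && meta.trim()) return meta.trim();

  // 2. Otherwise take the first <p> or <div> that looks like real sentences
  let summary = '';
  $('p, div').each((i, el) => {
    const text = $(el).text().replace(/\s+/g, ' ').trim();
    // "sentence-like": at least two period-delimited runs of several words
    const sentences = text.split('.').filter((s) => s.trim().split(' ').length >= minWords);
    if (sentences.length >= 2) {
      summary = sentences.slice(0, 3).join('. ').split(' ').slice(0, maxWords).join(' ');
      return false; // stop at the first match
    }
  });
  return summary;
}

Tuning minWords helps skip short navigation elements, much as the word-count adjustments described above did.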
