DSpace - OAI-PMH - Pagination

I would like to describe a problem with the pagination of results from the OAI-PMH Data Provider of DSpace.
Let's assume that I display the first page of results of a specific type, in this case the identifiers.
The URL of the request ends as shown here:
".../oaidl.jsp?verb=&ListIdentifiers&metadataPrefix=pico"
Up to this point, no problem. However, when I select the Show More button, using exactly the resumption token indicated at the end of the response to the first request:
".../oaidl.jsp?verb=ListIdentifiers&resumptionToken=9999-12-31|1753-01-01|null|pico|982|2019-03-02T14%3A29%3A11Z"
I get an error page with the following message:
"XML interpretation error: no root element found..."
Can you tell me what my mistake is and, if possible, how I can solve it?
Thank you very much in advance!

Which version of DSpace are you running? The reference implementation at demo.dspace.org has a different URL pattern than your example:
http://demo.dspace.org/oai/request?verb=Identify
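For reference, this is roughly how resumption-token paging works against that style of endpoint. Below is a minimal Python sketch, only an illustration: it assumes the demo.dspace.org endpoint above and the standard oai_dc prefix (your own base URL and the pico prefix would go in their place), and it simply sends the token back verbatim together with the verb, as the OAI-PMH spec requires.
# Sketch of OAI-PMH resumption-token paging; the demo URL and oai_dc prefix are placeholders.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET
BASE = "http://demo.dspace.org/oai/request"
NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}
params = {"verb": "ListIdentifiers", "metadataPrefix": "oai_dc"}
while True:
    url = BASE + "?" + urllib.parse.urlencode(params)   # urlencode also escapes the token
    with urllib.request.urlopen(url) as resp:
        tree = ET.parse(resp)
    for identifier in tree.iter("{http://www.openarchives.org/OAI/2.0/}identifier"):
        print(identifier.text)
    token = tree.find(".//oai:resumptionToken", NS)
    if token is None or not (token.text or "").strip():
        break                                            # no token means this was the last page
    # Follow-up pages carry only the verb and the token, exactly as returned.
    params = {"verb": "ListIdentifiers", "resumptionToken": token.text.strip()}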

Related

How to get a more specific "why" of the error?

In my Node backend, I am using express-validator to do some request body checks. For some reason, if I include the code to check one of the form values, it always gives me the error "Invalid value" (the default error message).
My question is: is there any way to get more specifics about the error? From what I know, all I can get is which field the error is thrown on, what the value is, and what the error message is. But I can't find out why the value is invalid, and I can't really fix it if I don't know why.
What should I do? I have looked at the API documentation for express-validator, but haven't found anything that might help.
Thanks to Heiko Theißen's comment, I went back to check and test my validation again, and eventually figured out that the problem was my custom validator: I wasn't returning the [collection].findById() call, so the value I returned inside it never made it back to the validator.
So if you are having this problem, check your custom validators, and add .withMessage() to your other validators so you can tell which check is actually failing.
If you think your custom validators are all correct and you still get the error, comment out your custom validators (and add a built-in one in their place if necessary).
If the error then goes away, you know that it has something to do with your custom validator.

Python selenium "Timeout Exception" error

I am trying to click on the Financials link of the following URL using Selenium and Python.
https://www.marketscreener.com/DOLLAR-GENERAL-CORPORATIO-5699818/
Initially, I used the following code:
link = driver.find_element_by_link_text('Financials')
link.click()
Sometimes this works and sometimes it doesn't, and I get an "Element is not clickable at point (X, Y)" error. I have added code to maximise the window in case the link was being overlapped by something.
It seems that the error is because the webpage doesn't always load in time. To overcome this I have been trying to use expected conditions and wait libraries of Selenium.
I came up with the following, but it isn't working; I just get a TimeoutException.
link = wait(driver, 60).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="zbCenter"]/div/span/table[3]/tbody/tr/td/table/tbody/tr/td[8]/nobr/a/b')))
link.click()
I think XPath is probably the best choice here, or perhaps class name, but there is no ID. I'm not sure if it's because the link is inside some table that it isn't working, but it seems odd to me that sometimes it works without having to wait at all.
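For what it's worth, an explicit wait keyed on the link text itself is usually sturdier than a long positional XPath. A minimal sketch, assuming the standard Selenium imports, a Firefox driver, and that the visible link text is exactly 'Financials':
# Sketch: wait for the 'Financials' link by its text instead of a positional XPath.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()  # or whichever browser you are using
driver.get('https://www.marketscreener.com/DOLLAR-GENERAL-CORPORATIO-5699818/')
driver.maximize_window()
link = WebDriverWait(driver, 60).until(
    EC.element_to_be_clickable((By.LINK_TEXT, 'Financials'))
)
link.click()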
I have tried Jacob's approach. The problem is that I want it to be dynamic so it will work for other companies. Also, when I first land on the summary page, the URL has other things at the end, so I can't just append /financials to the URL.
This is the URL it gives me: https://www.marketscreener.com/DOLLAR-GENERAL-CORPORATIO-5699818/?type_recherche=rapide&mots=DG
I might have found a way around this:
link = driver.current_url.split('?')[0]
How do I then access this list item and append the string 'financial/' to the list item?
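For what it's worth, split('?')[0] already gives back a plain string (the first item of the list), so the suffix can simply be concatenated onto it; a small sketch, assuming the /financials/ path segment used in the answer below:
base = driver.current_url.split('?')[0]  # e.g. .../DOLLAR-GENERAL-CORPORATIO-5699818/
driver.get(base + 'financials/')         # the base already ends with '/', so just append the segment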
I was looking for a solution when I noticed that clicking on the Financials tab takes you to a new URL. In this case, I think the simplest solution is just to use the .get() method for that URL,
i.e.
driver.get('https://www.marketscreener.com/DOLLAR-GENERAL-CORPORATIO-5699818/financials/')
This will always take you directly to the financials page! Hope this helps.

Selenium webdriver is showing NoSuchElementException even when element is already present on the webpage

I am using Selenium with Python to automate a website for my organisation. I am using the Mozilla Firefox web browser for this purpose; the Firefox version is 72.0.1 (64-bit).
I have already read all the answers on Stack Overflow similar to my problem, but I have not been able to find a solution.
I am fetching some data from a webpage in my Python program. My program is able to fetch the data for most of the required fields from that webpage, but it is not able to fetch the data of one particular field.
If I try to copy this field manually from the Firefox browser, I am unable to do so. But when I try to copy the same field manually through Internet Explorer, I am able to do so.
I have also tried both implicit and explicit waits, but a timeout exception is raised in that case. The following is the code to fetch this particular field:
community_name=driver.find_element_by_xpath("//input[@id='BNAZZZWUUGKEZQXF44ZDZMNEC5W1SZZZ__0___OLD']")
community_name = community_name.get_attribute("value")
print(community_name)
# If I apply a wait here, a timeout exception is raised
The following is the HTML code of this particular field:
<INPUT TYPE='hidden' NAME='BNAZZZWUUGKEZQXF44ZDZMNEC5W1SZZZ__0___OLD' ID='BNAZZZWUUGKEZQXF44ZDZMNEC5W1SZZZ__0___OLD' VALUE="bhpb-ean"><input id='BNAZZZWUUGKEZQXF44ZDZMNEC5W1SZZZ__0___OLD_text' name='BNAZZZWUUGKEZQXF44ZDZMNEC5W1SZZZ__0___OLD_text' readonly='' disabled='' value='bhpb-ean'/></td>
I have to copy the bhpb-ean value from the web browser.
I think this could be a problem related to the Firefox web browser. There are some other fields where a similar issue arises when getting the value. I have to use the Firefox browser only.
I hope I am clear. Can you help me solve my problem?
Thanks in advance.
EDIT: I used the correct XPath syntax in my program, but by mistake I forgot to write // in the XPath I posted in my question. Hence I have not yet found a solution to my problem. Can anybody help me?
Looking at your code trail, the XPath syntax you have provided is wrong. Change the XPath syntax and check:
community_name=driver.find_element_by_xpath("//input[@id='BNAZZZWUUGKEZQXF44ZDZMNEC5W1SZZZ__0___OLD']")
community_name = community_name.get_attribute("value")
print(community_name)
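One likely reason for the timeout, for what it's worth: the target is a hidden <INPUT>, so waits based on visibility or clickability will time out even when the element is present in the DOM, whereas a presence-based wait can still succeed, and get_attribute works on hidden elements. A minimal sketch, assuming the corrected XPath above and the existing Firefox driver session:
# Sketch: wait only for presence in the DOM (the input is hidden, so visibility
# checks would time out), then read its value attribute.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
community_name = WebDriverWait(driver, 30).until(
    EC.presence_of_element_located(
        (By.XPATH, "//input[@id='BNAZZZWUUGKEZQXF44ZDZMNEC5W1SZZZ__0___OLD']")
    )
).get_attribute("value")
print(community_name)  # expected: bhpb-ean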

Error with GET request URL, is it a URL issue or the returned info?

I'm trying to retrieve some data from a server (I can't really go into much detail), but I've run into an issue that is solved by commenting out part of the string being used as the URL to hit.
The situation is as follows:
The URL I want to hit is
http://example.com/api/statistics/installations?version=1.0&type=prod
I get errors with this (based on the returned data not being what I expected); however, using this works:
http://example.com/api/statistics/installations
Just without the refining version and type query parameters.
Now, I'm new to working with servers, so I was wondering whether perhaps the first URL is malformed towards the end or something?
Thanks to anyone who answers; even if it's just confirmation that the URL is correct, at least I'll know where the problem lies.
The URL is fine in that format; the issue was on the other end.
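For what it's worth, one way to rule out a malformed URL on the client side is to let the HTTP library assemble the query string and then inspect what was actually sent and returned. A minimal Python sketch using the placeholder example.com URL from the question (the requests library is just an assumption; whatever client you use will have an equivalent):
# Sketch: let requests build and encode the query string, then inspect the result.
import requests
resp = requests.get(
    'http://example.com/api/statistics/installations',
    params={'version': '1.0', 'type': 'prod'},
)
print(resp.url)          # http://example.com/api/statistics/installations?version=1.0&type=prod
print(resp.status_code)  # HTTP status of the response
print(resp.text[:200])   # start of the body, to see what actually came back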

jqGrid search-related issue

Is there any way to validate the search field (I'm using custom search) before sending the request to the server? Validation works fine in row editing and adding mode. Let's say I want to search the price column, and an error message should appear when the user enters text in the search field.
If the search returns no data, I want to post a message on the screen. I see no events in the search function that can get the server response. The onClose event fires when the search box is closed, but I don't know how to get the server response from it.
Another question: I've tried to use gridResize, but it's not working. Everything else works just fine, but I see no resize icon in the bottom-right corner. Please take a look at my code below:
jQuery("#list").jqGrid('gridResize',{minWidth:350,maxWidth:800,minHeight:80,
maxHeight:350});
The part of your question about validation of the custom searching seems to be the same as what I answered here. The answer includes a demo in which validation of the 'Client' field is included.
As you can see, the custom searching has been moved into the grid.addons.js module in version 4.0.0 of jqGrid, so it may be removed in some later version of jqGrid.
There is no special searching request to the server. There is just the standard request to fill the grid, in which the _search parameter (corresponding to the search parameter of jqGrid) is set to true, and other parameters such as filters describe the filter criteria. So you can use the emptyrecords parameter of jqGrid (see here). You can follow the demo (see the answer), which shows the message in the grid body.
Your problem with gridResize seems very easy to me. I suppose that you either did not include the jQuery UI JavaScript (including only the CSS is not enough) or you placed the call to gridResize in the wrong place. You did not post the JavaScript and HTML code that would show which JavaScript files you have loaded and in which order, so I cannot answer more precisely.
