Xpage picture paste error - xpages

I am getting this error when pasting a picture into the XPages rich text editor and saving the document; it mainly happens when the picture is large or high-resolution. Please let me know if there is a solution for this.
Error while executing active content filter
Exception in processing active content:
Illegal state: 62 (>)

If you use the insert functionality of CKEditor, the image is uploaded to the server first and then referenced in the CKEditor. But when pasting an image, it is encoded as a base64 string and added directly as an HTML image element:
<img alt="" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAADIAAAASXCAIAAACs2nJrAAAgAElEQVR4nOzdd3RVdb738fn7mZnnqjOOM6MztrFRLCjFQpFepRelKUhHWigJJQmBhE5IgCSk0EILNR1CEqpYriMlCc2uBPTe50raSSIK3uePvc+uv10UPBuP7"/>
As long as the image is small enough, the Active Content Filter (ACF) is able to parse the content of the CKEditor (HTML code), but as soon as the pasted image is too large, the parser crashes.
Please try to disable the content filter by setting the htmlFilter properties to identity:
<xp:inputRichText
    id="inputRichText1"
    value="#{document1.Body}"
    htmlFilterIn="identity"
    htmlFilter="identity">
</xp:inputRichText>
Hope this helps!
EDIT:
This would allow users to embed "malicious" HTML code.

If it's caused by the file size, it may be restricted by the "Maximum size of request content" setting in the HTTP Protocol Limits section on the Internet Protocols > HTTP tab of the server document. You may also need to change the "Maximum POST data (in kilobytes):" setting in the POST Data section of the Internet Protocols > Domino Web Engine tab.

Related

How to extract open graph meta data from a webpage using UIPath RPA?

Learning RPA with UIPath. Happily extracting onscreen data from a website, processing it, using it, etc.
However, there's information in the page that isn't visible, but is in the source, eg, open graph meta tags:
<meta property="og:image" content="https://example.com/foo.jpg" />
What options are open to me to extract this with UIPath? I gather there's an ExtractMetaData flag from ExtractData but I've yet to find a useful tutorial that I can follow at this stage :/
You can try the Data Scraping option by selecting the respective option from the Wizards tab.
Next, indicate on screen the area of data that you need to scrape, which can be:
Structured data in the form of a table
A specific element on the web page
Or the whole window
The Data Scraping wizard generates a container (Attach Browser or Attach Window) with a selector for the top-level window, and an Extract Structured Data activity with a partial selector.
All you then need to do is place your extraction XML as input in the ExtractMetadata field.
Hope this information is useful.
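For the og:image case, a minimal extraction XML for the ExtractMetadata field might look like the sketch below. The column name ogImage is made up, and the exact attribute names are assumptions based on the usual format the Data Scraping wizard generates, so treat this as a starting point rather than a definitive config:

```xml
<extract>
  <!-- One column per value to pull; attr names the HTML attribute to read -->
  <column exact="1" name="ogImage" attr="content">
    <webctrl tag="META" property="og:image" />
  </column>
</extract>
```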

How to get large pictures in Google image

I want to collect pictures from Google image search, but I keep running into an error.
For example, the URL https://www.google.com/search?q=banana&hl=en&gws_rd=ssl&tbm=isch is fine in my browser, but in Web-Harvest it reports: the reference to entity "gws_rd" must end with the ';' delimiter.
I guess '&' is a special character in Web-Harvest, but I cannot find information about it. Can you figure out why?
This is the code:
<var-def name="search" overwrite="false">banana</var-def>
<var-def name="url"><template>http://images.google.com/images?q=${search}&hl=en</template></var-def>
<var-def name="xml">
<html-to-xml>
<http url="${url}"/>
</html-to-xml>
</var-def>
<var-def name="largeImgUrl">
<xpath expression="//*[#id='irc_cc']/div[4]/div[1]/div/div[2]/div[1]/a/img">
<var name="xml"/>
</xpath>
</var-def>
From experience, you will need to first store the URL in a variable and then refer to the variable from within the http processor call.
EDIT
I notice you have pasted your code. Good.
1) Remember that all Web-Harvest config files are written in XML, and the ampersand (&) is a special character in XML, as it introduces an entity reference.
In Web-Harvest I normally avoid this issue by using CDATA sections within <template> or <code> blocks.
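Applied to the config above, that would mean wrapping the URL in a CDATA section so the XML parser leaves the & alone (alternatively, you could escape it as &amp;):

```xml
<var-def name="url">
  <template><![CDATA[http://images.google.com/images?q=${search}&hl=en]]></template>
</var-def>
```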
2) When using the Web-Harvest graphical interface, you can easily debug your XPath expressions. Run your code as normal, then on the toolbar at the top click the icon with a magnifying glass and choose "xml" (the name of the variable you set). This opens a new window with a preview of your XML; make sure the "view as" dropdown is set to xml.
You should now have an "xpath expression" box where you can test your XPath.
3) I strongly discourage writing XPaths that refer to numbered elements (e.g. div[4]/div[1]/div/div[2]/div[1]/). Any small change in the underlying page usually breaks the code. It is much better to select elements based on id or other properties.

WKHTMLTOPDF Dynamic Header on every page

I am trying to produce a PDF file from a large HTML file using the WKHTMLTOPDF library in Node. I need to be able to stuff some content into the header and footer on every page, but the header content changes on every page: e.g. custom numbering in a format like BX008761, where the number should increment on every page.
First page will be BX008761, second page BX008762, third BX008763 so on..
I found a related thread:
WKHTMLTOPDF -- Is possible to display dynamic headers?
the above thread states:
"you can feed --header-html almost anything :) Try the following to see my point:
wkhtmltopdf.exe --margin-top 30mm --header-html isitchristmas.com google.fi x.pdf
So isitchristmas.com could be www.yoursite.com/magical/ponies.php"
Will the source provided for the --header-html option be fetched for every page of the rendered PDF, or is it called just once per PDF?
Appreciate your support. Thank you.
EDIT: I have tried a sample program and confirmed that the value provided for the --header-html option is processed for every page rendered within the PDF. I am using a remote service to return the HTML string as a response to the URL.
Now it displays the HTML string as-is instead of rendering it.
When the service returns the string below:
<html> <body> <span style="color:red"> 123 :: 0 :: 3000025 :: 634943551338828720</span> </body> </html>
the header on every page shows that same literal text instead of displaying it in red. How do I make wkhtmltopdf understand that the content it received from the service needs to be decoded?
I'd appreciate it if anyone can suggest a workaround.
Thank you.
EDIT: I have used another workaround: returning an HTML page for the header content. I used an HTTPHandler in ASP.NET to return a valid response, and this addressed the core issue of having a dynamic header on every page.
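For reference, wkhtmltopdf requests the --header-html page once per PDF page and appends GET parameters such as page, frompage and topage to the URL, so the header page can compute its own number in JavaScript. A minimal sketch of that logic (the base value 8760 is a made-up starting point so that page 1 yields BX008761):

```javascript
// Sketch of per-page numbering logic for a --header-html page.
// wkhtmltopdf appends ?page=N&frompage=..&topage=.. to the header URL
// on every page, so the page can derive its own number from the query
// string (window.location.search in the header document).
function headerNumber(search, base) {
  // Parse the query string into a key/value map.
  var vars = {};
  var pairs = search.substring(1).split('&');
  for (var i = 0; i < pairs.length; i++) {
    var kv = pairs[i].split('=');
    vars[kv[0]] = decodeURIComponent(kv[1] || '');
  }
  var page = parseInt(vars['page'] || '1', 10);
  // Zero-pad to six digits: base 8760 gives BX008761 on page 1.
  var n = String(base + page);
  while (n.length < 6) { n = '0' + n; }
  return 'BX' + n;
}
```

In the header page itself you would call this from an onload handler and write the result into the DOM, e.g. document.getElementById('num').innerHTML = headerNumber(window.location.search, 8760);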

How to display a PDF once uploaded with JSF

I have created a file upload function which saves all the uploads to a certain place:
private String destination = "D:/Documents/NetBeansProjects/FileUpload/uploadedDocuments/";
Is this a good place to store it? Should I store it some where else?
Is it possible that once the upload is complete for a page to be displayed showing the user what they have just uploaded, like a box below showing a preview? And how would I go about doing this? I am new.
I have figured it out how to display a plain txt file and an image, however it is the pdf that is confusing me.
As to the upload location, which seems to be the IDE project folder, that's absolutely not right. You should choose a configurable and fixed location somewhere outside the IDE project folder. You should not rely on using getRealPath() or relative paths in java.io.File. It would make your webapp unportable. See also this related question: Uploaded image only available after refreshing the page.
Whatever way you choose based on the information provided in the answer of the aforementioned question (and all links therein), you should ultimately end up having a valid URL pointing to the PDF file in question such as http://example.com/files/some.pdf.
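For illustration, one way to get such a URL (assuming a Tomcat server; the folder path and the /files prefix are just examples matching the question) is to map the upload folder as a virtual context in Tomcat's server.xml:

```xml
<!-- Maps http://example.com/files/some.pdf to the folder on disk -->
<Context docBase="D:/Documents/uploadedDocuments" path="/files" />
```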
Then, you can serve the PDF file on a webpage using either an <iframe>:
<iframe src="/files/some.pdf" width="600" height="400"></iframe>
Or an <object>:
<object data="/files/some.pdf" type="application/pdf" width="600" height="400">
some.pdf <!-- This link will only show up when the browser doesn't support showing PDF. -->
</object>
Keep in mind that showing PDFs in a browser is only supported in browsers having Adobe Reader plugin. Note that the <object> approach will gracefully degrade to a link when the browser doesn't support displaying application/pdf content.

Why doesn't Windows Explorer display OpenSearch results with types other than text/html correctly?

I'm trying to display results from the REST API of IBM Content Analytics in Windows 7 Explorer through its OpenSearch integration. The REST API returns an Atom feed with an <atom:entry> element for each search hit.
My problem is: as soon as the type attribute of an entry's <atom:link> has a value other than text/html, the respective search hit in Windows Explorer is displayed as "No information available". In the minimal example below, the search hit is displayed correctly as soon as you remove type="application/msword" or change its value to text/html.
<atom:entry>
<atom:title>Hit B</atom:title>
<atom:link rel="alternate" type="application/msword" href="http://192.168.111.130:8394/api/v10/document/content?collection=Search&uri=file:///C:/DataFiles/Price%2BChange.doc" hreflang="en"/>
<atom:id>file:///C:/DataFiles/Price+Change.doc</atom:id>
<atom:summary>...B</atom:summary>
</atom:entry>
Can anyone explain this behaviour or say how to avoid it and display non-text/html results in Windows Explorer?
Documentation seems scarce; most of what I found was in the two documents linked below, but I didn't find anything on this issue there.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd940453%28v=vs.85%29.aspx
http://windowsteamblog.com/windows/b/developers/archive/2010/04/18/windows-7-federated-search.aspx
In RSS, the link tag needs to contain the URL as its text content, e.g.
<link>http://www.google.com</link>
Then the title, date, etc. are displayed in the file manager. The attribute form
<link href="http://www.google.com" />
would instead show "No information available" in the file manager.
For Atom, you need to include
<link href="http://www.google.com" />
without any "rel" attribute. Then you see the correct information in the file manager!
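Applied to the entry from the question, that would mean adding an extra plain link without any rel or type attribute; a hedged sketch (note the & in the query string must be escaped as &amp; in well-formed XML):

```xml
<atom:entry>
  <atom:title>Hit B</atom:title>
  <!-- Plain link with no rel/type attribute, so Explorer picks it up -->
  <atom:link href="http://192.168.111.130:8394/api/v10/document/content?collection=Search&amp;uri=file:///C:/DataFiles/Price%2BChange.doc" hreflang="en"/>
  <atom:id>file:///C:/DataFiles/Price+Change.doc</atom:id>
  <atom:summary>...B</atom:summary>
</atom:entry>
```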
