What do the different dates mean in XML returned by IBM Connections 4.0?

The XML returned from direct REST calls to Connections 4.0 contains dates like the following, from a File:
<published>2013-08-06T15:00:08.390Z</published>
<updated>2013-08-15T15:30:20.367Z</updated>
<td:created>2013-08-06T15:00:08.390Z</td:created>
<td:modified>2013-08-15T13:16:59.151Z</td:modified>
<td:lastAccessed></td:lastAccessed>
and from a File Comment:
<published>2013-08-08T18:04:44.949Z</published>
<updated>2013-08-08T18:05:39.566Z</updated>
<td:modified xmlns:td="urn:ibm.com/td">2013-08-08T18:05:39.566Z</td:modified>
<td:created xmlns:td="urn:ibm.com/td">2013-08-08T18:04:44.949Z</td:created>
The API documentation is vague about the conditions under which these dates are set:
<td:created> Creation timestamp in Atom format.
<td:modified> The date that the comment was last updated. Timestamp in Atom format.
<updated> The date that the comment was last updated, as defined in the Atom specification.
<published> The date the comment was initially published, as defined in the Atom specification.
Can one assume that <published> == <td:created> and that <updated> == <td:modified>, as the data seems to indicate, or are there circumstances under which these dates would have different values? Does the answer to this question vary by application (Files, Blogs, etc.)?
Edit
<updated> and <published> are Atom-defined properties. The <td:...> ones are IBM's extensions.
Another way to ask my question might be, What descriptions or definitions would I use to explain each of these dates to a user?

While td:created and published are generally identical (the main exception being content created as a draft and published later), applications use td:modified and updated with slightly different semantics. In Wikis, for instance, updated reflects the time the page contents or metadata last changed, while td:modified only changes when the page contents, i.e. the title or text, are updated. I expect the API documentation to clarify these subtle details; if it does not, please post comments and ask for improvements.
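For anyone who wants to check the published == td:created and updated == td:modified assumption empirically, here is a minimal sketch that walks a Files feed and flags entries where the pairs differ. The feed URL and credentials are placeholders for your own deployment; it uses only the Python standard library.

import urllib.request
import xml.etree.ElementTree as ET

# Namespaces used in Connections Atom feeds
NS = {
    "atom": "http://www.w3.org/2005/Atom",
    "td": "urn:ibm.com/td",
}

# Placeholder feed URL; adjust for your deployment.
FEED_URL = "https://connections.example.com/files/basic/api/myuserlibrary/feed"

def fetch_feed(url, user, password):
    """Fetch an Atom feed with basic authentication and return the parsed root."""
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(mgr))
    with opener.open(url) as resp:
        return ET.parse(resp).getroot()

def compare_dates(root):
    """Print entries where the Atom dates and the td: extension dates differ."""
    for entry in root.findall("atom:entry", NS):
        published = entry.findtext("atom:published", default="", namespaces=NS)
        updated = entry.findtext("atom:updated", default="", namespaces=NS)
        created = entry.findtext("td:created", default="", namespaces=NS)
        modified = entry.findtext("td:modified", default="", namespaces=NS)
        if published != created or updated != modified:
            title = entry.findtext("atom:title", default="(no title)", namespaces=NS)
            print(f"{title}: published={published} created={created} "
                  f"updated={updated} modified={modified}")

if __name__ == "__main__":
    compare_dates(fetch_feed(FEED_URL, "user", "password"))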

Related

Instagram API: get new comments on any media (since a given date or comment id)

How can I get recent comments on my media from a given date or comment id?
That is, I am interested in getting all the comments on any of my media that have been published since a date, or since a comment id that I already have.
From what I've seen, I would have to go through each media item and all the comments it contains to find the new ones.
But that approach does not seem right to me, since I would have to make many calls, going through the media and their comments again and again until I find a new one.
I have also seen that subscriptions are not yet available for media or comments; that would have been ideal, because it is exactly what I want for getting new comments, instead of having to traverse everything again looking for something new.
Is there any way to do it?
There is currently no API that automatically returns new comments across a set of media.
You have to fetch the comments for each media item and compare the dates manually in code, using this API:
https://api.instagram.com/v1/media/{media-id}/comments?access_token=ACCESS-TOKEN
Also note that this API only returns the latest 150 comments, so if the media is very popular and gets more than 150 comments, you have to call the API at regular intervals and check the dates so you don't miss any new comments.
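A rough sketch of that manual date comparison, assuming the endpoint above and that each comment in the v1 response carries a created_time field (a Unix timestamp string); the access token and media id below are placeholders:

import json
import urllib.request

ACCESS_TOKEN = "ACCESS-TOKEN"  # placeholder

def comments_since(media_id, since_ts):
    """Return comments on one media item created after the given Unix timestamp.

    Assumes each comment in the v1 response carries a 'created_time' string.
    """
    url = (f"https://api.instagram.com/v1/media/{media_id}/comments"
           f"?access_token={ACCESS_TOKEN}")
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    return [c for c in payload.get("data", [])
            if int(c.get("created_time", 0)) > since_ts]

# Example: poll a list of media ids and collect anything newer than last_seen.
if __name__ == "__main__":
    last_seen = 1376586000  # Unix timestamp of the newest comment already processed
    for media_id in ["17851234567890123"]:  # hypothetical media id
        for comment in comments_since(media_id, last_seen):
            print(comment["id"], comment["text"])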

XPages: display the document history

In some 'old' Lotus Notes applications we kept a history of each document: who created it and every person who edited it, plus the respective dates. The code consisted of several LotusScript script libraries.
For XPages, is there any snippet or working sample example I could use? I found this, but I couldn't download any example ...
ValueChangeListeners allow you to capture changes to specific components. I've used them to create audit trails in customer applications before.
Tony McGuckin has an XSnippet for it:
http://openntf.org/XSnippets.nsf/snippet.xsp?id=server-side-value-change-events-listeners
Declan Lynch covered it in a blog post:
http://www.qtzar.com/using-a-valuechangelistener-to-build-an-audit-trail/
Don McNally has also done a blog post:
http://dmcnally.blogspot.co.uk/2013/02/xpages-detecting-and-logging-field.html
I don't know of any pre-built snippet yet, but this becomes a lot easier in XPages, especially if you expand into Java. When I create an application these days I basically convert the document to a Java object. I don't do this yet, but it would be easy to store in the object a Map of all the fields and their current values, then on save look for differences and write them out to a log document.
This could be done without Java, of course: create a map object in scope, populate it when the document loads, and on save do the compare and write, as sketched below.
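A minimal sketch of that compare-on-save idea, written in Python only to illustrate the logic (in XPages the same thing would live in SSJS or a Java bean bound to the document): snapshot the field values on load, diff them on save, and log the differences.

def snapshot(doc_fields):
    """Copy the document's field values when the document is loaded."""
    return dict(doc_fields)

def audit_changes(before, after):
    """Compare the load-time snapshot with the save-time values and return the differences."""
    changes = []
    for field in sorted(set(before) | set(after)):
        old, new = before.get(field), after.get(field)
        if old != new:
            changes.append({"field": field, "old": old, "new": new})
    return changes

# Example: values captured on load vs. values at save time (made-up field names).
loaded = snapshot({"Status": "Draft", "Owner": "Jane Doe"})
saved = {"Status": "Approved", "Owner": "Jane Doe", "Approver": "John Smith"}
for change in audit_changes(loaded, saved):
    # In a real application each entry would be written to a log document.
    print(change)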
Something went wrong with that project on OpenNTF (don't ever use an ampersand in the name). I'm the original author of that custom control. After some digging I found a direct URL to the project here.

Search Algorithm for a web application that needs to look for a specific value

I'm developing a webapp that will need to download the HTML from a website and then iterate through the code to try to find a specific but ever-changing value (in our case the price of a product).
For this, I was thinking about asking the user (upon installation and setup) to provide the system with a few lines of html from the page (that has the price) and then from then on, every time we need to fetch the price we would try to search for those lines and find the price.
Now, I believe this is a horrible and slow way of doing it, but since there are no rules and the HTML can be totally different from one website to another (even the same website might change), I couldn't find a better way.
One improvement I thought of was to iterate through the page the first time and record the line at which we find the value. On subsequent runs we would then start the search a few lines before the expected location. Any thoughts on how I can improve on this?
I posted this question on https://cstheory.stackexchange.com/ but they commented that it's not on topic and that I should post it here.
I have the code for the above and can post it if needed; I'm simply thinking that there must be a better, faster way of doing this.
This is actually something I tried for a project recently (using BeautifulSoup and Python). The solution that worked for me was to work out CSS selectors (which can map to jQuery selectors) that target the elements containing the values I was looking for. In my case I was able to narrow the full document down to just the elements that contained what I was after, but if you can't get exactly what you want, you could combine this with some extra logic, such as a regex test to see whether the text looks like a price, or a check of what it sits next to.
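Here is a minimal sketch of that approach using BeautifulSoup's select() with a CSS selector and a regex sanity check that the match looks like a price. The URL and selector are placeholders you would work out once per site.

import re
import urllib.request
from bs4 import BeautifulSoup  # pip install beautifulsoup4

PRICE_RE = re.compile(r"[$€£]?\s*\d{1,3}(?:[.,]\d{3})*(?:[.,]\d{2})?")

def extract_price(url, css_selector):
    """Fetch a page and return the first selector match that looks like a price."""
    with urllib.request.urlopen(url) as resp:
        soup = BeautifulSoup(resp.read(), "html.parser")
    for element in soup.select(css_selector):
        text = element.get_text(strip=True)
        match = PRICE_RE.search(text)
        if match:
            return match.group(0)
    return None

# Hypothetical product page and selector, worked out once per site.
print(extract_price("https://shop.example.com/product/123", "span.price"))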

Multilingual solution

Two questions, hopefully with similar answers.
1. I'll be releasing a JavaScript package in my solution in which error messages are displayed. The problem is that I'll be targeting German, English and French, and possibly a fourth language TBD. What would be the nicest way to handle this?
2. The label names should definitely be localized. Is there a built-in approach to that in CRM 2011, like a resource table or something similar?
My current solution for (1) is to keep an extra web resource with the strings and to distribute a different file for each language. I may rebuild it to ship all the languages at once and select one via a parameter, possibly settable from the GUI if I create a settings entity. A bit cumbersome.
My current solution for (2) involves a lot of praying and a divine act of some sort. :)
To determine the current CRM user's language dynamically from JavaScript you can use window.USER_LANGUAGE_CODE (this variable exists on all CRM pages); for example, it will equal 1033 for English. Then, based on that info, you can pick the needed string resources from your file.
Also, in the form context there are two predefined functions that return the current organization language code and the current user language code: Xrm.Page.context.getOrgLcid() and Xrm.Page.context.getUserLcid().
If you are talking about custom entities and fields, you can easily add localized display names for them via your solution. You need to edit the customizations.xml file from your unzipped solution. For each attribute you will find XML like this containing the display names:
<displaynames>
<displayname description="Created By" languagecode="1033" />
</displaynames>
You can just add new display names for each language you need there.
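For example, with German (1031) and French (1036) added alongside the existing English (1033) entry, the block might look like this; the description values are whatever your localized labels should be:
<displaynames>
  <displayname description="Created By" languagecode="1033" />
  <displayname description="Erstellt von" languagecode="1031" />
  <displayname description="Créé par" languagecode="1036" />
</displaynames>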
P.S. If anyone is interested in other aspects of multilingual support for Dynamics CRM 2011 solutions, I strongly recommend reviewing this page; here and here are also very helpful reading.

Can an Open Graph object be multi-page?

I've searched a lot for the answer to this question - but can't find one - possibly it is just too stupid, in which case please forgive me!
I want to add og metadata to our pages, but the information for each logical object (in our case a sports team or player) can be spread across multiple actual URLs (eg /team/, /team/players/, /team/results/ are all logically part of /team/).
Can I put the same opengraph metadata on multiple pages that represent the same object?
Or alternatively, can I specify the og:url as a regex, eg: /team/* ?
Or does /team/ imply /team/* for an og:url ?
Thanks very much for any clarification, Mike
the information for each logical object (in our case a sports team or player) can be spread across multiple actual URLs (eg /team/, /team/players/, /team/results/ are all logically part of /team/).
Do you mean all of these URLs contain the same information (they are just different points of access to that info) – or do you mean the info is spread in „bits and pieces” over these URLs (and a user would have to visit them all to get all the info)?
I’m not sure I understand your question/problem here – but maybe you’re just looking for what’s called a canonical URL …?
In every subpage you could call the Open Graph API passing the parent page as the object; there is no need to put metadata on the subpages as well, if you are not interested in having the subpages be separate objects. This way, a like clicked on an individual page is always credited to the team. You might use a custom property to specify where the click came from (or even the ref property, maybe).
On a side note, I would not say that results are logically part of the team, although for this specific usage it does not matter.
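One concrete way to tie the subpages to the team object (the canonical URL idea mentioned above) is to let /team/players/ and /team/results/ carry Open Graph tags whose og:url points back at /team/, so likes and shares from any of those pages resolve to the same object. The URLs, title and type below are made-up examples:
<meta property="og:url" content="https://example.com/team/" />
<meta property="og:title" content="Example Team" />
<meta property="og:type" content="website" />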
