I think my question would be better explained with a couple of examples...
GET http://myservice/myresource/?name=xxx&country=xxxx&_page=3&_page_len=10&_order=name asc
that is, on the one hand I have conditions ( name=xxx&country=xxxx ) and on the other hand I have parameters affecting the query ( _page=3&_page_len=10&_order=name asc )
now, I thought about using some special prefix ( "_" in this case ) to avoid collisions between conditions and parameters ( what if my resource has an "order" property? )
is there some standard way to handle these situations?
--
I found this example (just to pick one)
http://www.peej.co.uk/articles/restfully-delicious.html
GET http://del.icio.us/api/peej/bookmarks/?tag=mytag&dt=2009-05-30&start=1&end=2
but in this case the condition fields are already defined (there is no start or end property on the resource)
I'm looking for some general solution...
--
edit, a more detailed example to clarify
Each item is completely independent of the others... let's say that my resources are customers, and that (luckily) I have a couple of million of them in my db.
so the url could be something like
http://myservice/customers/?country=argentina,last_operation=2009-01-01..2010-01-01
It should give me all the customers from argentina that bought anything in the last year
Now I'd like to use this service to build a browse page, or to fill a combo with ajax, for example, so the idea was to add some metadata to control what info I should get
to build the browse page I would add
http://...,_page=1,_page_len=10,_order=state,name
and to fill an autosuggest combo with ajax
http://...,_page=1,_page_len=100,_order=state,name,name=what_ever_type_the_user*
to fill the combo with the first 100 customers matching what the user typed...
my question was whether there is some standard (written or not) way of encoding this kind of stuff in the URL in a RESTful manner...
While there is no standard, Web API Design (by Apigee) is a great book of advice when creating Web APIs. I treat it as a sort of standard, and follow its recommendations whenever I can.
Under "Pagination and partial response" they suggest (page 17):
Use limit and offset
We recommend limit and offset. It is more common, well understood in leading databases, and easy for developers.
/dogs?limit=25&offset=50
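Just to make that concrete, here is a minimal sketch of applying limit/offset server-side (my own illustration, not from the book; the Dog class and the in-memory ordering are stand-ins for a real data source):

using System.Collections.Generic;
using System.Linq;

public class Dog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class DogPaging
{
    // Apply ?limit=25&offset=50 semantics: skip 'offset' items and return at most 'limit'.
    // Ordering first keeps the slices stable between requests.
    public static IEnumerable<Dog> GetPage(IEnumerable<Dog> dogs, int limit = 25, int offset = 0)
    {
        return dogs.OrderBy(d => d.Id).Skip(offset).Take(limit);
    }
}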
There's no standard or convention which defines a way to do this, but using underscores (one or two) to denote meta-info isn't a bad idea. This is what's used to specify member variables by convention in some languages.
Note:
I started writing this as a comment to my previous answer. Then I was going to add it as an edit, but I think it belongs as a separate answer instead, since it is a completely different approach and stands in its own right.
The more I have been thinking about this, the more I think that you really have two different resources to deal with:
A page of resources
Each resource that is collected into the page
I may have missed something (could be... I've been guilty of misinterpretation). Since a page is a resource in its own right, the paging meta-information is really an attribute of the resource, so placing it in the URL isn't necessarily the wrong approach. If you consider what can be cached downstream for a page and/or referred to as a resource in the future, the resource is defined by both the paging attributes and the query parameters, so they should both be in the URL. To continue with my entirely too lengthy response, the page resource would be something like:
http://.../myresource/page-10/3?name=xxx&country=yyy&order=name&orderby=asc
I think that this gets to the core of your original question. If the page itself is a resource, then the URI should describe the page so something like page-10 is my way of saying "a page of 10 items" and the next portion of the page is the page number. The query portion contains the filter.
The other resource names each item that the page contains. How the items are identified should be controlled by what the resources are. I think that a key question is whether the result resources stand on their own or not. How you represent the item resources differs based on this concept.
If the item representations are only appropriate when in the context of the page, then it might be appropriate to include the representation inline. If you do this, then identify them individually and make sure that you can retrieve them using either URI fragment syntax or an additional path element. It seems that the following URLs should result in the fifth item on the third page of ten items:
http://.../myresource/page-10/3?...#5
http://.../myresource/page-10/3/5?...
The largest factor in deciding between these two is how strongly coupled the individual item is with the page. The fragment syntax is considerably more binding than the path element IMHO.
Now, if the item resources are free-standing and the page is simply the result of a query (which I think is likely the case here), then the page resource should be an ordered list of URLs for each item resource. The item resource should be independent of the page resource in this case. You might want to use a URI that is based on the identifying attribute of the item itself. So you might end up with something like:
http://.../myresource/item/42
http://.../myresource/item/307E8599-AD9B-4B32-8612-F8EAF754DFDB
The key deciding factor is whether the items are freestanding resources or not. If they are not, then they are derived from the page URI. If they are freestanding, then they should be defined as their own resources and should be included in the page resource as links instead.
I know that the RESTful folk tend to dislike the usage of HTTP headers, but has anyone actually looked into using HTTP ranges to solve pagination? I wrote an ISAPI extension a few years back that included pagination information along with other non-property information in the URI, and I never really liked the feel of it. I was thinking about doing something like:
GET http://...?name=xxx&country=xxxx&_orderby=name&_order=asc HTTP/1.1
Range: pageditems=20-29
...
This puts the result set parameters (e.g., _orderby and _order) in the URI and the selection in a Range header. I have a feeling that most HTTP implementations would screw this up, though, especially since support for non-byte ranges is a MAY in RFC 2616. I started thinking more seriously about this after doing a bunch of work with RTSP. The Range header in RTSP is a nice example of extending ranges to handle time as well as bytes.
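Just to make the idea concrete, a rough sketch of how a server might parse such a custom range unit (the unit name pageditems comes from my example above; everything else is illustrative):

using System;

public static class PagedItemsRange
{
    // Parses a header value like "pageditems=20-29" into (20, 29); returns null if the
    // unit or the bounds are not what we expect, so the caller can fall back to
    // returning the full collection.
    public static (int First, int Last)? Parse(string headerValue)
    {
        const string unit = "pageditems=";
        if (headerValue == null || !headerValue.StartsWith(unit, StringComparison.OrdinalIgnoreCase))
            return null;

        var parts = headerValue.Substring(unit.Length).Split('-');
        if (parts.Length != 2
            || !int.TryParse(parts[0], out var first)
            || !int.TryParse(parts[1], out var last))
            return null;

        return (first, last);
    }
}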
I guess another way of handling this is to make a separate request for each item on the page as an individual resource in its own right. If your representation allows for this, then you might want to consider it. It is more likely that intermediate caching would work very well with this approach. So your resources would be defined as:
myresource/name=xxx;country=xxx/orderby=name;order=asc/20/
myresource/name=xxx;country=xxx/orderby=name;order=asc/21/
myresource/name=xxx;country=xxx/orderby=name;order=asc/22/
myresource/name=xxx;country=xxx/orderby=name;order=asc/23/
myresource/name=xxx;country=xxx/orderby=name;order=asc/24/
I'm not sure if anyone has tried something like this or not. It would make URIs constructible, which is always a useful property IMHO. The bonus to this approach is that the individual responses could be cached and the server is free to optimize the handling of collecting pages of items in the most efficient way. The basic idea is to have the client specify the query in the URI along with the index of the item that it wants to retrieve. There is no need to push the idea of a "page" into the resource or even to make it visible. The client can iteratively retrieve objects until its page is full or it receives a 404.
There is a downside, of course... the HTTP server and infrastructure have to support pipelining, or the cost of connection creation/destruction might kill the idea outright.
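For what it's worth, a rough client-side sketch of that iterate-until-404 loop (HttpClient-based; the URL shape and page size are illustrative):

using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class ItemPager
{
    // Fetches items starting at 'startIndex' until the page is full or the server
    // answers 404 (meaning we ran past the end of the result set).
    public static async Task<List<string>> FetchPageAsync(
        HttpClient http, string queryBase, int startIndex, int pageSize)
    {
        var items = new List<string>();
        for (int i = startIndex; items.Count < pageSize; i++)
        {
            var response = await http.GetAsync($"{queryBase}/{i}/");
            if (response.StatusCode == HttpStatusCode.NotFound)
                break;
            response.EnsureSuccessStatusCode();
            items.Add(await response.Content.ReadAsStringAsync());
        }
        return items;
    }
}

For example, FetchPageAsync(client, "http://.../myresource/name=xxx;country=xxx/orderby=name;order=asc", 20, 5) would collect the five items starting at index 20.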
Our plugin maintains some instance parameter values across many elements, including those in groups.
Occasionally the end users will introduce data that activates an unused Category,
so we have to update the document parameter bindings, to include those categories. However, when we call
doc.ParameterBindings.ReInsert()
our existing parameter values inside groups are lost, because our VariesAcrossGroups flag is toggled back to false?
How did Revit intend this to work - are we supposed to use this in a different way, to not trigger this problem?
ReInsert() expects a base Definition argument, and would usually get an ExternalDefinition supplied.
To learn, I instead tried to scan through the definition-keys of existing bindings and match those.
This way, I got the document's InternalDefinition, and tried calling ReInsert with that instead
(my hope was that since its existing InternalDefinition DID include VariesAcrossGroups=true, this would help). Alas, ReInsert doesn't seem to care.
The problem, as you might guess, is that after VariesAcrossGroups=False, a lot of my instance parameters have collapsed into each other, so they all hold identical values. Given that they are IDs, this is less than ideal.
My current (intended) solution is to instead grab a backup of all existing parameter values BEFORE I update the bindings; then, after the binding update and after setting VariesAcrossGroups back to true, inspect all values and re-assign the parameter values that have been broken. But as you may surmise, this is less than ideal - it will be horribly slow for the users of our plugin, and frankly it seems like something the Revit API should take care of, not the plugin developer.
Are we using this the wrong way?
One approach I have considered is to bind every possible category I can think of, up front and once only. But I'm not sure that is possible. Categories in themselves are also difficult to work with, as you can only create them indirectly, by using your Project-Document as a factory (i.e. you cannot create a category yourself, you can only indirectly ask the Document to - maybe! - create a category for you, that you request). Because of this, I don't think you can bind for all categories up front - some categories only become available in the document AFTER you have included a given family/type in your project.
To sum it up: First, I
doc.ParameterBindings.ReInsert()
my binding, with the updated categories. Then, I call
InternalDefinition.SetAllowVaryBetweenGroups()
(after having determined IDEF.VariesAcrossGroups has reverted back to false.)
I am interested to hear the best way to do this, without destroying the client's existing data.
Thank you very much in advance.
(I'm not sure I will accept my own answer).
My answer is just that you can circumvent this problem
by scanning the entire Revit database for your existing parameter values before you update the document bindings.
Afterwards, you reset VariesAcrossGroups back to its lost value.
Then, you iterate through your collected parameters, and verify which ones have lost their original value, and reset them back to their intended value.
One trick that speeds this up a bit is that you can check Element.GroupId <> -1, i.e., restrict the scan to those elements that are group members.
You only need to track elements which are group members, as it's precisely those that are affected by this Revit bug.
A further tip is that you should not only watch out for parameter values that have lost their original value. You must also watch out for parameter values that have accidentally GOTTEN a value, but which should be left un-set.
I just use FilteredElementCollector with WhereElementIsNotElementType().
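To make that concrete, here is a rough sketch of the backup/restore pass. It assumes a text-type instance parameter looked up by name (the name you pass in is a placeholder), and it leaves out the ReInsert()/SetAllowVaryBetweenGroups() calls and the Transaction that the restore must run inside:

using System.Collections.Generic;
using Autodesk.Revit.DB;

public static class GroupParamBackup
{
    // 1. BEFORE updating the bindings: remember the current value for every group member.
    public static Dictionary<ElementId, string> Backup(Document doc, string paramName)
    {
        var backup = new Dictionary<ElementId, string>();
        foreach (Element e in new FilteredElementCollector(doc).WhereElementIsNotElementType())
        {
            if (e.GroupId == ElementId.InvalidElementId) continue; // only group members are affected
            Parameter p = e.LookupParameter(paramName);
            if (p != null && p.HasValue)
                backup[e.Id] = p.AsString();
        }
        return backup;
    }

    // 2. AFTER ReInsert() and SetAllowVaryBetweenGroups(true), inside an open Transaction:
    //    restore values that were collapsed, and clear values that appeared where there were none.
    public static void Restore(Document doc, string paramName, Dictionary<ElementId, string> backup)
    {
        foreach (Element e in new FilteredElementCollector(doc).WhereElementIsNotElementType())
        {
            if (e.GroupId == ElementId.InvalidElementId) continue;
            Parameter p = e.LookupParameter(paramName);
            if (p == null || p.IsReadOnly) continue;

            bool hadValue = backup.TryGetValue(e.Id, out string original);
            if (hadValue && p.AsString() != original)
                p.Set(original);          // value was collapsed by the binding update
            else if (!hadValue && p.HasValue)
                p.Set(string.Empty);      // value accidentally propagated; should be left un-set
        }
    }
}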
Performance-wise, it is of course horrible to do all this,
but given how Revit behaves, I see no other solution if you have to ship to your clients.
I plan to build a server that will use a REST API in Node.js with Meteor.
What are the differences between these two methods of writing an API:
1. the style described in http://meteorpedia.com/read/REST_API, for example: someSrver.com/post/:_id
2. someSrver.com/post?id=_id
Thanks
I think there's no real difference, as long as you are handling the request correctly.
I think the second style suits better if you have to pass the server a lot of parameters, for example a service that provides high-res images, where you can specify tile size, coordinates and other things like that.
If you use the API as an interface to a database, usually the first style is used.
I've developed a REST interface using this library: https://atmospherejs.com/nimble/restivus , it's very easy to use and it uses the first style.
So, you should read up on REST principles and API design if you want to really understand the why's.
But generally, the rule of thumb is that a URL should represent a resource. The paths generally represent a given "thing" and the querystring represents some kind of filter on that "thing".
So if you "post" object is logically its own entity (e.g.a blog post), then it'd have its own unique url such that a GET to www.example.com/posts/:id would return the one specific blog entry you're talking about.
GET /posts would map to a list of all posts, for example, and GET /posts?tagged=cheetahs would get you a list of all posts filtered to return just those with the tag of 'cheetahs' assigned to them.
This is all rules of thumb and standards. The implementation really doesn't matter and most servers don't care; but there is value in following the standards as they tend to be more maintainable, elegant, and help you not have to make a million design decisions. If you ever want others to integrate with you, it makes it easier for them to know what to expect as well.
According to the URI standard, the query is for non-hierarchical filters, while the path is for hierarchical ones.
I would use query if we are talking about a filtered collection, so the result will be a representation of a collection, for example json []. On the other hand, if we are talking about an item, then I would use the path and the json would be an object/item {}. But this is only my own style; you can use whichever one you prefer. (The URI structure has only routing purposes if you use REST with HATEOAS. I assume you don't.)
In REST the URL/URI is the address of an item or a collection of items. So, to get all addresses for customer 2 you could do this:
/api/customer/2/addresses
If however you wanted just those addresses with a postcode you could go:
/api/customer/2/addresses?withPostcode=1
In this case, the first URL/URI represents a thing/things whereas the second has a modifier, restriction or filter applied to it.
Therefore someSrver.com/post/:_id means get me the post which is known by that ID (though ideally it would be someSrver.com/posts/:_id - note the plural). The second one (someSrver.com/post?id=_id), on the other hand, implies that everything to the left of the question mark has already narrowed down your thing/things, and they now need filtering by an ID property (in this case) on the thing.
It's a subtle distinction in many cases, but I'd sum it up as the first applying a selector/location and the second applying a selector/location with a filter.
Although I haven't implemented a REST API server in Node yet, I want to share a few important points to keep in mind when you design your server:
Try to use flat paths for the controllers, nested paths are causing confusion.
Avoid the less common HTTP methods such as PUT, HEAD and PATCH; not all firewalls like them.
Use application-level error codes in the response body with HTTP 200 OK, and reserve the HTTP protocol error codes for actual HTTP errors.
See more: http://restafar.com/create-new-rest-server/
I have a requirement to add fields onto a form based on data from another set of entities. Is this possible using an event script or does it require a plugin?
Assuming I understand your assignment correctly, it can be done using JavaScript as well as with a plugin. There is a significant difference that you need to take into consideration.
Is the change to the other entities to be made only when an actual user loads a form? If so, JS is the right way.
Or perhaps you need to ensure that those values are written even if a console client or system process retrieves the value of the primary entity? In that case, C# is your only option.
EDIT:
Simply accessing the values from any entity in the onload event can be done using a call to oData. I believe someone else asked a similar question recently. The basic format will look like this.
http://Server:Port/Organization
/XrmServices/2011/OrganizationData.svc
/TheEntityLogicalNameOfYoursSet()?$filter=FieldName eq 'ValueOfIt'
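Just to show the shape of the request, here is a rough sketch of issuing that query from a generic .NET HTTP client (form script would use an XMLHttpRequest instead; server, port, organization, entity set and field names are placeholders, and authentication is omitted):

using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class CrmODataQuery
{
    // Rough sketch only: builds the OrganizationData.svc URL shown above and asks for JSON.
    public static async Task<string> FetchAsync(HttpClient http, string server, int port, string org)
    {
        var uri = $"http://{server}:{port}/{org}/XrmServices/2011/OrganizationData.svc"
                + "/TheEntityLogicalNameOfYoursSet()?$filter=FieldName eq 'ValueOfIt'";

        var request = new HttpRequestMessage(HttpMethod.Get, uri);
        request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();   // raw JSON payload
    }
}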
Some extra remarks.
If you're targeting an online installation, the syntax will differ, of course, because the scheme-server-port-organization part is provided in a different pattern (https, orgName.crm4.something.something.com, etc.). You can look it up under Settings.
Perhaps it should go without saying and I'm sure you realize it but for completeness' sake, TheEntityLogicalNameOfYours needs to be substituted for the actual name (unless that is your actual name, in which case I'll be worried, haha).
If you're new to this whole oData thingy, keep asking. I got the impression that the info I'm giving you is appreciated but not really producing "aha!" experience for you. You might want to ask separate questions, though. Some examples right off the top of my head.
a. "How do I perform oData call in JavaScript?"
b. "How do I access the fetched data?"
c. "How do I add/remove/hide a field programmatically on a form?"
d. "How do I combine data from...?"
In my new project I have used a lot of Content Query Web Parts (CQWP), and I found that the site becomes slower and slower to visit as the number of CQWPs increases. The questions I want to ask are:
Does a CQWP take a lot of server resources that make the site slow for visitors?
If I want to query the lists and customize the style of output then can I do it without a CQWP?
Take a look at this link; maybe you have to use custom XSLT with a function to filter the output of the CQWP.
http://blog.mastykarz.nl/extending-content-query-web-part-xslt-custom-functions/
For your first question
My answer is: it depends on several things, not only on the number of CQWPs on the page.
Let me explain:
The CQWP has many things to do, like fetching data from the list, which may be a standard SharePoint list or a custom list. The resource utilization depends on the logic applied to fetch the data from the list; by that I mean that both the amount of data to be fetched and the complexity of the logic used to get it matter for server resource utilization.
For example, if you have a class that performs complex logic to get the data (comparisons, if/else conditions and foreach loops) and the amount of data available in the list is large, then it is obvious that it will take more resources from the server.
I hope you get my point
For your second question
My answer is: you can use a CQWP or a DVWP (Data View Web Part), but be sure you know when to use which one.
To get a better idea about both of these, take a look at this link
http://www.sharepointblog.co.uk/2012/06/data-view-web-part-vs-content-query-web-part/
Having friendly URLs is generally a good thing. However, there are sometimes when it seems like a bad idea. What is your rule of thumb?
For instance, consider a situation where I want to show a Registration Success page. I want all of the underlying logic to be the same. However, depending on how they registered, I may want to display a different message for someone who registered under a certain type of role.
Here are a few, off-the-cuff examples of "hackable" (as described in link) URLs:
http://www.example.com/RegistrationSuccess.aspx?IsCertainRole=true
http://www.example.com/RegistrationSuccess.aspx?role=CertainRole
http://www.example.com/RegistrationSuccess.aspx?r=2876
All of these seem bad since I don't want the URLs to be discoverable. On the other hand, I hate to do something more complex just to modify the success message slightly.
How would you handle this?
Bear in mind that obfuscating URLs is NOT a security measure. You should never trust outside input - filter, sanitize and implement restrictive logic. No matter how clever you believe your obfuscation scheme to be, people have cracked much more complicated security schemes with relative ease.
As a general rule of thumb - there is no good reason to obfuscate URLs intentionally. Use URLs to communicate read operations (a path to a resource). Use POST requests to communicate write operations (adding/modifying data). If a user isn't supposed to be able to do something through the URL, it should be regulated server side and through the request method.
You can either POST the data, or, if that's not an option, set the value in a Session variable and then read the value in the success page. The actual complexity of code using the Session is about the same as using the query string.
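A rough Web Forms sketch of the Session approach (the key, the control and the messages are illustrative):

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// The registration page would set the value before redirecting:
//     Session["RegistrationRole"] = "CertainRole";
//     Response.Redirect("RegistrationSuccess.aspx");
public partial class RegistrationSuccess : Page
{
    protected Label MessageLabel;   // normally declared in the .aspx markup

    protected void Page_Load(object sender, EventArgs e)
    {
        var role = Session["RegistrationRole"] as string;

        // Same underlying page; only the message varies with the role stored in Session.
        MessageLabel.Text = role == "CertainRole"
            ? "Thanks for registering! Your partner account is ready."
            : "Thanks for registering!";
    }
}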
OK, if you don't think this is a security issue, since you are only displaying a different message, then why do you care whether it's hackable or not?
Most users wouldn't even notice the URL is editable, so why obfuscate? The "elite hackers" will get a slightly different message; big deal.
The general answer to "Should I obfuscate...?" is no. If it's for security, hell no; otherwise, why are you obfuscating? Most likely, you are wasting time.
URLs are for uniquely referencing content. When the contents are the result of a process that involves several steps of dialogue, these contents can't really have a URL, because the URL does not reproduce the process.
I would forward them to RegistrationSuccess.aspx and present contents there based on the state of the session.
If somebody comes to that URL without the suitable session state, I would forward them to the front page after 5 seconds of looking at a friendly message explaining that there is nothing to see.
A better choice yet may be to forward them to MyRegistration.aspx, which is something they would perhaps like to make a favourite out of. Coming from the Registration process, it may have a box explaining that they have successfully registered. If they are not coming there from the Registration process, this box is not presented. The rest of the page is the summary of all earlier Registration processes for that user.
With a POST submission?
If you don't want information in the URL, don't put it in the URL.
It's not always easy to do...
I would say that pages you want to be easily indexed by the search engines should use URL routing. This includes high-traffic pages.
For other pages, which users will only visit a few times a month or year, you can leave the normal URLs.
If you must absolutely use the URL for private/personalized data, you'd probably be better off generating a random unique identifier on the server and using that in your URL. Kind of like confirmation e-mails where you have to click a link.
Otherwise, if there's any other way to not include data in the URL, you shouldn't. In the case of a successful registration, either the person just registered and you should be in a current session, or you should require them to login before they see the customized page.
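A rough sketch of the random-identifier idea (the dictionary stands in for whatever persistence you use; expiry and cleanup are omitted):

using System;
using System.Collections.Concurrent;

// Hand out an opaque token instead of meaningful query values.
public static class ConfirmationTokens
{
    private static readonly ConcurrentDictionary<string, int> TokenToUserId =
        new ConcurrentDictionary<string, int>();

    public static string CreateUrl(int userId)
    {
        // Guid is fine for a sketch; a real system may prefer a cryptographically random token.
        var token = Guid.NewGuid().ToString("N");
        TokenToUserId[token] = userId;
        return "http://www.example.com/RegistrationSuccess.aspx?t=" + token;
    }

    public static bool TryResolve(string token, out int userId)
    {
        return TokenToUserId.TryGetValue(token, out userId);
    }
}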
Why not make "registration success" message be a last step, but not change pages?
You can use Ajax or Server.Transfer() to do that.
I could check against a whitelist of referring URLs so that they can't just type in a different URL. That might eliminate an obvious "hack" by a passerby.
(Obviously, you can get around this if you're a nerd.)
You could make some sort of checksum or hash on the querystring items, so if they mess around with the URL, the checksum fails and it kicks them out to the main page.
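For example, a rough sketch of signing the query string with an HMAC (the key and parameter names are illustrative; keep the real key in server-side configuration):

using System;
using System.Security.Cryptography;
using System.Text;

// Append an HMAC of the query values so tampered URLs can be detected and rejected.
public static class QuerySigner
{
    private static readonly byte[] Key = Encoding.UTF8.GetBytes("replace-with-a-long-random-secret");

    public static string Sign(string query)                  // e.g. "r=2876"
    {
        return query + "&sig=" + ComputeSig(query);
    }

    public static bool IsValid(string query, string sig)     // query without the sig parameter
    {
        return string.Equals(ComputeSig(query), sig, StringComparison.OrdinalIgnoreCase);
    }

    private static string ComputeSig(string query)
    {
        using (var hmac = new HMACSHA256(Key))
        {
            var hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(query));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }
}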