Nuxeo: Document or Collection located in more than one location

Is it possible to have a Document or Collection located in more than one place in the repository? For example, can I have a Document or Collection linked to more than one folder? From what I understand, it is not possible.

What you are looking for is called a "proxy". See NXDOC/Repository+Concepts.
A proxy is very much like a symbolic link on a Unix-like OS: a proxy
points to a document and will look like that document from the user's point
of view:
The proxy will have the same metadata as the target document. The
proxy will hold the same files as the target document (since a file is
a special kind of metadata).
A proxy can point to a live document or to a version (a checked-in,
archived version).
Proxies make it possible to see the same document from several
places without duplicating any data.
(...)
While a proxy cannot hold its own metadata, it can hold its own security descriptors
(ACP/ACL), so a user may be able to see one proxy and not another.
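For illustration, here is a minimal sketch of creating a proxy through the Nuxeo Java API; the session acquisition and the repository paths are placeholders, so adapt them to your setup:

import org.nuxeo.ecm.core.api.CoreSession;
import org.nuxeo.ecm.core.api.DocumentModel;
import org.nuxeo.ecm.core.api.PathRef;

// Assumes an open CoreSession (obtained from the runtime or an operation context).
CoreSession session = ...;

// Placeholder paths: the document to link and the second folder it should appear in.
DocumentModel doc = session.getDocument(new PathRef("/default-domain/workspaces/ws1/mydoc"));
DocumentModel otherFolder = session.getDocument(new PathRef("/default-domain/workspaces/ws2"));

// Create a proxy to the live document in the other folder: the same document
// is now visible from both places without duplicating any data.
DocumentModel proxy = session.createProxy(doc.getRef(), otherFolder.getRef());
session.save();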

Related

How to hide the actual file name when accessing files through Amazon CloudFront?

There is an S3 bucket with millions of MP3 files in it. Each music file, along with its preview MP3 file, is in a folder inside that bucket. For example, for a given track:
/music/123456/file_master.mp3
/music/123456/file_preview.mp3
I want to let end users access the preview file through CloudFront and its web streaming feature. So I have set up CloudFront, and users can click on a link which points to the file on CloudFront:
http://blahblah.cloudfront.net/music/123456/file_preview.mp3
It works perfectly, except that a user can grab the file URL, replace the _preview part with _master, and then listen to the entire track. Unfortunately, moving the master file and the preview file to two different locations is not an option: not only are there millions of them, but an ingestion system is constantly publishing files with that structure.
Is there a way to hide the file name and/or file path? e.g. something like http://blahblah.cloudfront.net/music/123456/ABC would be perfect.
I don't think CloudFront supports the kind of URL rewriting you propose, but you might be able to solve your problem with behaviors. Add a new behavior to the CloudFront distribution whose "Path Pattern" matches only "*_preview.mp3". Behaviors are handled in sequential order with first match winning, so use the behavior precedence to put the new behavior in front of the distribution's default behavior. Finally, set "Restrict Viewer Access" to "Yes" on the default behavior, while leaving it set to "No" on the new behavior that matches only "*_preview.mp3".
See http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesPathPattern for more information about the Path Pattern.
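If you would rather script the change, the console's "Restrict Viewer Access" setting maps onto the TrustedSigners element of a cache behavior. Here is a rough sketch with the AWS SDK for Java; the origin ID is a placeholder, and a real update also needs the remaining required behavior fields plus a GetDistributionConfig/UpdateDistribution round trip:

import com.amazonaws.services.cloudfront.model.CacheBehavior;
import com.amazonaws.services.cloudfront.model.TrustedSigners;

// Matches only the preview files; "Restrict Viewer Access" = No, so previews
// stay publicly playable. CloudFront evaluates behaviors in order, first match wins.
CacheBehavior previewBehavior = new CacheBehavior()
        .withPathPattern("*_preview.mp3")
        .withTargetOriginId("music-s3-origin") // placeholder origin ID
        .withTrustedSigners(new TrustedSigners()
                .withEnabled(false)
                .withQuantity(0));

// Applied to the catch-all default behavior: "Restrict Viewer Access" = Yes,
// so master files require a signed URL and cannot be fetched by rewriting
// _preview to _master.
TrustedSigners restrictMasters = new TrustedSigners()
        .withEnabled(true)
        .withQuantity(1)
        .withItems("self"); // sign URLs with this AWS account's key pairs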

Restkit: GET remote linked object when foreign key refers to missing local object in Core Data

I cannot figure out whether RestKit is able to download an object that is not present locally, in particular when a foreign key refers to that missing object in a to-one relationship in Core Data.
Take the following example:
- contact 1 refers to company 2
- contact 1 is present in the local db but company 2 is not
- when the user inspects the details of contact 1 in the UI, a GET for contact 1 is performed
- the GET returns JSON containing, among other contact details, the property company_id = 2
I have already set up RestKit via the foreign key convention, where I store the foreign key in the contact object (companyID) and link it to the remote identifier (identifier) stored in the company object:
[contactResponseMapping addConnectionForRelationship:@"forCustomer" connectedBy:@{ @"companyID" : @"identifier" }];
In this case I have not managed to configure RestKit to download the entity (referred to by company_id) when it is not present in the local db.
Is this possible?
A workaround would be to override the companyID setter, check whether the entity exists, and download it if not. This is not desirable to me, though, because I have set up an engine that receives a notification every time an object is edited and posts it to the server, which means I would have to block the thread execution until the linked object is downloaded.
Also, is this mechanism called lazy loading, or hydrating entities?
I cannot find any other similar cases around.
Hope you can help, going a bit crazy on this.
Thanks a lot.
PS: I am using RestKit 0.21.0 (i.e. the latest development release, which Blake Watters confirmed to be stable).
This is not a feature that RestKit currently offers (probably because of the types of issues you discuss).
For your workaround, consider what your engine is doing in relation to relationship edits - how are they pushed back to the server? Are they always pushed?
Also, think about creating stub objects for your 'foreign' objects so that at least some representation always exists (there are a couple of ways to do this; you can set up mappings to accomplish the task). Then, when you come to use one of these objects, you can hydrate / lazy load it.
See also Clarifying terminology : "Hydrating" an entity : Fetching properties from the DB.

How to split a CMIS URL into the repository path and the path with respect to the repository?

I'm a GSoC'13 intern. I'm working on developing a CMIS UCP for Apache OpenOffice.
I wanted to know how to divide a URL into its parts.
To fill the session parameters, I need the repository URL and the path of the object within the repository separately.
Is there any other way?
If you know the path of an object, you can retrieve it using getObjectByPath, which is a method on org.apache.chemistry.opencmis.client.api.Session.
If you have an object and you want to know its path, you can call the object's getPaths() method, which returns a list of paths for the object (in repositories that support multi-filing, documents can have multiple paths, but folders can never be multi-filed).
The actual URL you would construct to navigate directly to the object using its path is repository-specific, unless you are using the browser binding (new in CMIS 1.1). But there aren't any production implementations of the CMIS 1.1 browser binding yet.
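For reference, here is a minimal OpenCMIS sketch of both calls; the binding URL, repository ID, and object path below are placeholders:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.CmisObject;
import org.apache.chemistry.opencmis.client.api.FileableCmisObject;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

Map<String, String> params = new HashMap<String, String>();
params.put(SessionParameter.ATOMPUB_URL, "http://host/cmis/atom"); // placeholder
params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
params.put(SessionParameter.REPOSITORY_ID, "default");             // placeholder

SessionFactory factory = SessionFactoryImpl.newInstance();
Session session = factory.createSession(params);

// Path -> object: the path is relative to the repository's root folder.
CmisObject obj = session.getObjectByPath("/Sites/docs/report.odt"); // placeholder

// Object -> path(s): a document may have several paths if the repository
// supports multi-filing; a folder always has exactly one.
List<String> paths = ((FileableCmisObject) obj).getPaths();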

Drupal 7: how to restrict file access to specific user roles

I need to develop a site on Drupal 7. I have some content types with File fields. Access to nodes of these types should be granted only to a specific Drupal user role, and at any moment the site administrator should be able to make these nodes 'public' or 'private'.
I can make the nodes visible only to specific user roles, but this is not secure enough. If an anonymous user knows the path to a file (www.mysite.org/hidden_files/file1), he can download it.
What is the most elegant way to solve this problem?
Thanks in advance.
Check out this documentation here: http://drupal.org/documentation/modules/file
Specifically, the section titled "Managing file locations and access", which talks about setting up a private data store (all supported by Drupal 7; it just needs to be configured).
To paraphrase, create a folder such as:
sites/default/files/private
Put a .htaccess file in that folder with the following to prevent direct access to the files via the web:
Deny from all
(The documentation claims that the following step performs the two steps above automatically. I haven't tested that, unfortunately, but you may be able to save some time by skipping them.)
Log into Drupal's admin interface, go to /admin/config/media/file-system, configure the private URL and select Private Files Served by Drupal as the default download method.
In order to define the fine-grained access to nodes and fields, you can use Content Access: http://drupal.org/project/content_access
You will also need to edit your content types and set the file / image upload fields to save the uploaded files into Private Files instead of Public Files.
At this point, the node and field level permissions will determine whether or not users are allowed to access the files which will be served through menu hooks that verify credentials before serving the file.
Hope this helps.

Cannot crawl complex URLs without setting a site-wide rule to 'crawl as http content'

I have pages within a site containing a control that uses a query string to provide dynamic data to the user (http://site/pages/example.aspx?id=1).
I can get my content source to index these dynamic pages only if I create a rule which sets the root site (http://site/*) to 'include complex urls' and 'crawl sharepoint content as http content'. This is NOT acceptable as changing the crawling protocol from SharePoint's to HTTP will prevent any metadata from being collected on the indexed items. The managed metadata feature is a critical component to our SharePoint applications.
To dispel any suspicion that this is simply a configuration error on my part, refer to http://social.technet.microsoft.com/Forums/en-US/sharepointsearch/thread/4ff26b26-84ab-4f5f-a14a-48ab7ec121d5 . The issue mentioned there is exactly my problem, but the solution is unusable, as I mentioned before.
Keep in mind this is for an external publishing site, and my search scope is being trimmed using content classes to include only documents/pages (STS_List_850 and STS_ListItem_DocumentLibrary). Creating a new web site content source and adding it to my scope presents two problems: duplicate content in the scope, and no content class defining it that I know of.
What options do I have?
Just a thought: maybe you should create two data sources, one - SharePoint - for metadata and items and one - HTTP - for the pages. Set rules on each one to exclude the other's content. Would that solve your problem?
I have decided to take a different approach to this problem, as combining dynamic HTTP content and SharePoint content into one scope is a non-trivial problem, better suited to an entirely new project than to the retrofit I was attempting.
If you have dynamic content from a separate system which you want to crawl without sacrificing SharePoint metadata from the rest of your site, it seems the only option is to write a BCS application/search connector, crawl the two content sources separately, and combine them with a scope and possibly an extended core results Web Part. Good luck!
