Q1: Where is the upload file location in Cognos Analytics 11?
I need to know the upload file location so I can back up the files.
Q2: How do I set the upload file size limit for a single user?
I know how to set it globally, as in the attached screenshot of the global setting.
I'm pretty sure that Cognos 11 stores uploaded files in the Content Store database, but I'm not quickly seeing where right now. There are files in the "data files location" (see Cognos Configuration), but when I look up a specific uploaded .xlsx file in the Content Store, I can't confirm that it is any of the files in the file system. IBM would tell you that you need to use the SDK to get to this content in any automated way that is not through the Cognos Analytics UI.
I don't think there is a per-user setting for the upload limit. The documentation makes no mention of this setting being available per user.
Q1:
It is not clear what you mean by "I need to know the upload file location to back up files", so here are a few possible readings.
i. The uploaded files are stored in the content store. If you want to back them up, create a deployment. You can also back up the Content Manager (CM) database itself.
ii. If the question is how to recover data from an uploaded file that may have gone missing: you can create a report, add all the columns of the uploaded file, and run it as Excel or CSV. There may not be 100% fidelity of data types, etc. The proper answer is to establish proper governance for the use of spreadsheets and the like so that this sort of spreadsheet risk is lessened.
iii. By default, the upload location is the user's My content folder. You can change that under Manage > Customization > Profile, where there is a setting for the default upload location. Keep in mind that a user may not be able to upload files to a particular folder if they do not have write permissions on that folder.
Q2:
It is not possible to set a per-user upload file limit.
Related
Is there any way to upload a file to a SharePoint document library together with all its properties (in one REST call)? I recently found that if I upload a file and then its properties (it doesn't matter which properties), SharePoint treats it as a new version of the file, and it consumes as much storage as the previous version. So, for example, if I upload a large file (4 GB) and then set some custom properties, the file will now consume 8 GB of storage, regardless of whether the file itself was changed or not.
With SharePoint SOAP it is possible, but with REST it seems it is not.
Thanks
You have to do them separately, and it doesn't actually create a copy of the file. SharePoint stores the deltas, and in your case there wouldn't be a change in the document. When you look at the version history, it can be misleading because you will see that version 1.0 is 4 GB and version 2.0 is 4 GB, which would make you think that you are consuming 8 GB, but that is not the case. If you were to add a 3rd version that was 5 GB, you wouldn't have 13 GB used. Instead, the database will only store 5 GB of data, and SharePoint essentially pieces the file together from the database.
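To make the two-step approach concrete, here is a rough sketch of the two separate REST calls (upload the file content, then MERGE the list item fields), assuming you already have a bearer token; the site URL, library, file and field names are placeholders, and the exact Accept/Content-Type values may need to match your tenant's OData settings (e.g. a __metadata block for verbose OData).

    // Sketch only, not the definitive pattern: the two calls done back to back.
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class SpUploadSketch
    {
        static async Task Main()
        {
            string siteUrl     = "https://contoso.sharepoint.com/sites/demo"; // placeholder
            string accessToken = "<bearer token>";                            // placeholder

            using var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);
            client.DefaultRequestHeaders.Accept.Add(
                new MediaTypeWithQualityHeaderValue("application/json"));

            // 1) Upload the file content into the document library.
            byte[] bytes = File.ReadAllBytes("bigfile.bin");
            string uploadUrl = siteUrl +
                "/_api/web/GetFolderByServerRelativeUrl('Shared Documents')" +
                "/Files/add(url='bigfile.bin',overwrite=true)";
            (await client.PostAsync(uploadUrl, new ByteArrayContent(bytes)))
                .EnsureSuccessStatusCode();

            // 2) Update the fields on the list item behind the file (a separate call).
            string itemUrl = siteUrl +
                "/_api/web/GetFileByServerRelativeUrl('/sites/demo/Shared Documents/bigfile.bin')" +
                "/ListItemAllFields";
            var update = new HttpRequestMessage(HttpMethod.Post, itemUrl)
            {
                Content = new StringContent("{ \"Title\": \"My custom title\" }",
                                            Encoding.UTF8, "application/json")
            };
            update.Headers.TryAddWithoutValidation("IF-MATCH", "*");
            update.Headers.Add("X-HTTP-Method", "MERGE"); // metadata-only update, no new file payload
            (await client.SendAsync(update)).EnsureSuccessStatusCode();
        }
    }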
I have some files on S3 and would like to view those files on the web. The problem is that the files are not public and I don't want them to be public. The Google Docs viewer works, but the condition is that the files must be public.
Can I use Office Web Apps to show them in the browser? Since the files are private, I do not want to store any data on Microsoft servers. It looks like even the Google Docs viewer stores the info while parsing.
What is the cleanest way?
Thanks.
I have looked around for something similar before, and there are some apps you can install locally (CyberDuck, S3 Browser, etc.). In-browser options have been limited until recently (full disclosure: I worked on this project).
S3 LENS - https://www.s3lens.com/
I'll probably get a minus here, but Microsoft also has an online viewer, which works the same way: the file needs to be publicly accessible.
Here is the link: https://view.officeapps.live.com/op/view.aspx
What I could add is that those files need to be publicly accessible only for a short period, i.e. until the page gets opened. So you could trick it by uploading the file to be viewed to public temporary storage in a randomly generated folder and giving that URL to the online viewer.
Of course this is not that safe, since the file will at some point get to the temp storage and then to Google or Microsoft, but the random path names offer some degree of safety.
I've recently created a small Glitch app which demonstrates what I just explained: https://honeysuckle-eye.glitch.me/
It uploads local files to temporary storage and then opens the viewer from that temp storage; the temp storage only lasts for one download, so it is pretty safe.
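As a minimal sketch of the trick described above (the temp-storage base URL is a hypothetical placeholder), the viewer link is simply the public file URL passed in the src query parameter of https://view.officeapps.live.com/op/view.aspx:

    using System;

    class ViewerLinkSketch
    {
        static void Main()
        {
            // Hypothetical public temp storage; the random folder makes the URL hard to guess.
            string publicBaseUrl = "https://temp-files.example.com";
            string randomFolder  = Guid.NewGuid().ToString("N");
            string fileName      = "report.xlsx";

            string fileUrl   = publicBaseUrl + "/" + randomFolder + "/" + fileName;
            string viewerUrl = "https://view.officeapps.live.com/op/view.aspx?src="
                               + Uri.EscapeDataString(fileUrl);

            // Open this URL in a browser (or an iframe), then delete the temp file afterwards.
            Console.WriteLine(viewerUrl);
        }
    }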
I've seen that in Liferay we can specify the maximum upload size of a file:
dl.file.max.size=
But I haven't found a way to limit the number of files a user (or community) can upload. Obviously, we don't want a user or community to upload massive amounts of files and fill up our shared drive. Is there a way to do this?
There's no such quota - that's why you didn't find it. However, you are able to extend Liferay to contain and honor a quota. It has been demonstrated with this app - the source code is linked, and you might be able to take it as the basis for building your number-of-files quota. It doesn't take the number of files into account either, but it keeps an eye on the volume already stored and should be easy to extend.
I'm assuming that the Spanish user group would happily accept pull requests if you have a good extension to this plugin.
Does anyone have guidance and/or example code (which would be awesome) on how I would go about the following?
With a Web application using C# / ASP.NET MVC and hosted on Azure:
Allow a user to upload an Excel Workbook (multiple worksheets) via a web page UI
Populate a Dataset by reading in the worksheets so I can then process the data
Couple of things I'm unclear on:
I've read that Azure doesn't have ACEOLEDB, which is what Excel 2007+ requires, and that I'd have to use the Open XML SDK. Is this true? Is this the only way?
Is it possible to read the file into memory and not actually save it to Azure storage?
I DO NOT need to modify the uploaded spreadsheet. Only read the data in and then throw the spreadsheet away.
Well, that's many questions in one post; let me see if we can tackle them one by one.
For both points: you can let the user upload the Excel workbook to some /temp location, and once you have read it you can do the cleanup; you can also write a script that cleans up files that couldn't be deleted from /temp for whatever reason.
Alternatively, if you want to keep the files, you should store them in Azure Storage and fetch/read them when you need to.
Check out this thread: read excelsheet in azure uploaded as a blob
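If you go the Azure Storage route, a rough sketch along these lines (using the classic WindowsAzure.Storage NuGet client; the connection string, container name and blob name are placeholders) covers the upload and the read-back into memory:

    using System.IO;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    static class BlobRoundTrip
    {
        // Upload the posted workbook as a block blob.
        public static void Save(Stream uploaded, string blobName)
        {
            GetContainer().GetBlockBlobReference(blobName).UploadFromStream(uploaded);
        }

        // Pull the blob back into memory so it can be handed straight to an Excel reader.
        public static MemoryStream Load(string blobName)
        {
            var ms = new MemoryStream();
            GetContainer().GetBlockBlobReference(blobName).DownloadToStream(ms);
            ms.Position = 0;
            return ms;
        }

        private static CloudBlobContainer GetContainer()
        {
            var account   = CloudStorageAccount.Parse("<storage connection string>"); // placeholder
            var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
            container.CreateIfNotExists();
            return container;
        }
    }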
By default, when you upload a file it is written to local disk, and you can later choose to save the files to Azure Storage or wherever else.
Reading the Excel file - you can use any of the NuGet packages listed here http://nugetmusthaves.com/Tag/Excel to read the Excel file; I prefer GemBox and NPOI.
http://www.aspdotnet-suresh.com/2014/12/how-to-upload-files-in-asp-net-mvc-razor.html
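For the "read it into memory and throw it away" requirement, here is a hedged sketch using NPOI, one of the packages mentioned above (the controller and action names are made up); it reads the posted workbook straight from the request stream into a DataSet without saving anything to disk or Azure storage:

    using System.Data;
    using System.IO;
    using System.Web;
    using System.Web.Mvc;
    using NPOI.SS.UserModel;
    using NPOI.XSSF.UserModel;

    public class WorkbookUploadController : Controller   // hypothetical controller
    {
        [HttpPost]
        public ActionResult Upload(HttpPostedFileBase file)
        {
            // Read directly from the request stream; nothing is written to disk or blob storage.
            DataSet data = ReadWorkbook(file.InputStream);

            // ... process "data" here, then simply let the upload go out of scope ...
            return RedirectToAction("Index");
        }

        private static DataSet ReadWorkbook(Stream stream)
        {
            var dataSet = new DataSet();
            IWorkbook workbook = new XSSFWorkbook(stream);   // .xlsx; use HSSFWorkbook for .xls

            for (int s = 0; s < workbook.NumberOfSheets; s++)
            {
                ISheet sheet = workbook.GetSheetAt(s);
                var table = new DataTable(sheet.SheetName);

                for (int r = 0; r <= sheet.LastRowNum; r++)
                {
                    IRow row = sheet.GetRow(r);
                    if (row == null) continue;

                    // Grow the column list as needed, then copy cell text.
                    while (table.Columns.Count < row.LastCellNum)
                        table.Columns.Add("Column" + table.Columns.Count);

                    DataRow dataRow = table.NewRow();
                    for (int c = 0; c < row.LastCellNum; c++)
                        dataRow[c] = row.GetCell(c)?.ToString() ?? string.Empty;
                    table.Rows.Add(dataRow);
                }
                dataSet.Tables.Add(table);
            }
            return dataSet;
        }
    }

Neither NPOI nor the Open XML SDK needs the ACE OLEDB provider, which is the part that isn't available in Azure, so the Open XML SDK is not the only way.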
We are using RBS to store SharePoint files outside the SharePoint content database. This works nicely with SP2010, but when we moved to SharePoint 2013 we found that extra data is added to the file in the RBS directory. Is there a way to tell SharePoint not to add this extra binary data? Our users basically access the RBS through a read-only shared folder on the network, and we have built a business process that depends on that.
You're probably referring to shredded storage. In SP2010 there is a single FILESTREAM blob (.rbsblob) for each file version stored on an SP site (i.e. the file is stored in the content database, but its content is offloaded to RBS). In SP2013, shredded storage can shred a single file version into multiple smaller pieces and pads them with extra bytes, which means that you can't easily access a single file from the RBS itself - and you're not supposed to either. What that means is that when you upload a file "document.docx", you don't get a single blob but multiple smaller blobs (depending on the size and settings) that can't easily be merged together - what you see in RBS are these multiple blobs.
The best you can do is to prevent files from getting shredded, but it's a dirty way: How to disable Shredded Storage in SharePoint 2013? However, this will only work for newly uploaded files and has an impact on storage and performance (shredded storage enables you to, e.g., store only deltas of a single document: say you have 10 document versions that are 50% the same - that 50% will be shared across all versions as shared shreds instead of being stored multiple times as it would be in SP2010).
One option that might be useful is to map a SharePoint site to a network drive and let users work with that instead of the RBS directory directly - this way, files will get accessed via RBS without explicitly exposing the blobs (shreds) themselves.
Another option is to download/open a specific RBS-stored file by using SPFile.OpenBinary(), which will merge the RBS-stored shreds and return a single (original) file that you can then store elsewhere (e.g. into another shared folder) - this way, you're duplicating files, but that's pretty much how it's supposed to be anyway. For example, this way you can open a file "document.docx" that is visible on an SP site, but stored in RBS as 5 .rbsblob shreds, and save it as "document.docx" elsewhere (outside of SP).
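A minimal sketch of that second option, using the server-side object model (it has to run on a SharePoint 2013 server; the site URL, file URL and export share below are placeholders):

    using System.IO;
    using Microsoft.SharePoint;

    class ExportRbsFile
    {
        static void Main()
        {
            // Placeholder URLs/paths; run under an account with read access to the site.
            using (SPSite site = new SPSite("http://sp2013/sites/docs"))
            using (SPWeb web = site.OpenWeb())
            {
                SPFile file = web.GetFile("/sites/docs/Shared Documents/document.docx");

                // OpenBinary() reassembles the RBS-stored shreds into the original file content.
                byte[] bytes = file.OpenBinary();
                File.WriteAllBytes(@"\\fileserver\export\document.docx", bytes);
            }
        }
    }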
RBS storage is internal to SharePoint and you should NOT access its files directly. There are no guarantees that the data will remain in the original format as stated here:
Although RBS can be used to store BLOB data externally, accessing or changing those BLOBs is not supported using any tool or product other than SharePoint 2013. All access must occur by using SharePoint 2013 only.
Also, I'm not aware of any RBS-related configuration for the described behaviour.