IBM Connections API - Uploading files in a community and getting nonce - Widget - ibm-connections

There is a requirement to upload a file into a Community from a widget (instead of going to the Files section within a community and uploading there).
Can we upload files from a widget into a Community's Files section? I found an article here which talks about uploading files, but not under the Community section. Is uploading files through the Community section possible? Any reference would help.
How do we get a nonce in a widget? Do we still need to pass authentication parameters, or can we use those of the currently logged-in user?

You should look at
http://www-10.lotus.com/ldd/appdevwiki.nsf/xpAPIViewer.xsp?lookupName=API+Reference#action=openDocument&res_title=Adding_a_file_using_a_multipart_POST_ic50&content=apicontent
However, you want to POST to this URL pattern:
https://{serverUrl}/files/basic/api/communitylibrary/{communityUuid}/feed
For the nonce:
If you are in an iWidget specifically, you can look at the csrf_token and use that value in the X-Update-Nonce header.
However, there is no guarantee that it will remain named csrf_token. You should instead use http://www-10.lotus.com/ldd/appdevwiki.nsf/xpAPIViewer.xsp?lookupName=API+Reference#action=openDocument&res_title=Getting_a_cryptographic_key_ic50&content=apicontent
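For illustration, a minimal sketch of that flow with the Python requests library, assuming basic authentication; the server URL, community UUID, credentials, and file name are placeholders, and the nonce endpoint is the one from the "Getting a cryptographic key" reference above.

import requests

SERVER = "https://connections.example.com"                 # placeholder server URL
COMMUNITY_UUID = "00000000-0000-0000-0000-000000000000"    # placeholder community UUID
AUTH = ("username", "password")                            # or reuse the logged-in user's session

# 1. Fetch a nonce ("cryptographic key") as described in the API reference above.
nonce = requests.get(SERVER + "/files/basic/api/nonce", auth=AUTH).text.strip()

# 2. Multipart POST the file into the community library feed.
with open("report.pdf", "rb") as fh:
    resp = requests.post(
        SERVER + "/files/basic/api/communitylibrary/" + COMMUNITY_UUID + "/feed",
        auth=AUTH,
        headers={"X-Update-Nonce": nonce},
        files={"file": ("report.pdf", fh, "application/pdf")},
    )
resp.raise_for_status()
print(resp.status_code, resp.headers.get("Location"))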

Related

Prevent Cross Site Scripting but still support HTML file upload

I have a web application where users can upload and view files. The user has a link next to the file they have uploaded. Clicking the link will open the file in the browser (if possible) or show the browser's download dialog. That means an HTML/PDF/TXT file will be rendered in the browser, but a Word document will be downloaded.
It has been identified that rendering the HTML file in the browser could be a vulnerability: cross-site scripting.
What is the right solution to this problem? The two options I am currently looking at are:
put a Content-Disposition header in the response so that HTML files are downloaded instead of viewed in the browser;
find some HTML scrubbing/sanitizing library to remove any JavaScript from the file before serving it.
Looking at Gmail, they take the second approach (scrubbing), combined with a separate domain for file downloads - perhaps to minimize the attack surface. However, with this approach the receiver gets a different file than what was sent, which is not 'right' in my opinion; maybe I am biased. In my case, the first option is easy to implement, but I wonder whether it is enough, or whether there is anything I have overlooked.
What are your thoughts on these approaches? Or do you have any other suggestions?
Based on your description, I can see 3 possible attack types (maybe there are more):
Client side code execution
As you said, your web server may serve a file as HTML and run JavaScript code on the client. This can be avoided with Content-Disposition, but I would go with MIME type control through Content-Type. I would define my known file types (e.g. PDF, JPEG, etc.) and serve them with their respective MIME types (e.g. application/pdf, image/jpeg, etc.). Anything else I would serve as application/octet-stream.
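As a rough illustration of that idea (not a complete solution), a hedged sketch in Python/Flask; the upload directory, route, and whitelist below are hypothetical.

from flask import Flask, send_file, abort
import os

app = Flask(__name__)
UPLOAD_DIR = "/srv/uploads"          # stored outside the web root
SAFE_TYPES = {
    ".pdf": "application/pdf",
    ".jpg": "image/jpeg",
    ".jpeg": "image/jpeg",
    ".png": "image/png",
    ".txt": "text/plain",
}

@app.route("/files/<name>")
def serve(name):
    # basename() avoids path traversal out of the upload directory
    path = os.path.join(UPLOAD_DIR, os.path.basename(name))
    if not os.path.isfile(path):
        abort(404)
    ext = os.path.splitext(name)[1].lower()
    mime = SAFE_TYPES.get(ext, "application/octet-stream")
    # Unknown types (including HTML) are forced to download rather than render.
    return send_file(path, mimetype=mime, as_attachment=(ext not in SAFE_TYPES))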
Server side code execution
Although I see this as an off-topic attack (since it involves other parts of your application and your server), be sure to avoid executing files on the server (e.g. PHP code through LFI). Your web server should not access the files directly (again, e.g. PHP); better to store them somewhere not accessible through a URL and retrieve them on request.
Consider whether you can reject certain files here (e.g. reject .exe uploads) and ask the user to zip them first.
Trust issues
Since the files are under the same domain, they will be accessible from JavaScript (via AJAX or loaded as a script), and other programs (or people) may trust your links. This is also related to the previous point: if you don't need unzipped .exe files, don't allow them. Using another domain may mitigate some trust problems.
Other ideas:
Zip all files uploaded
Scan each file with antivirus software
PS: For me sanitization would not work in your case. The risk of missing something is too high.

AWS S3 generate_presigned_url vs generate_presigned_post for uploading files

I was working on uploading and downloading files to/from an S3 bucket using pre-signed URLs. I came across these two methods: generate_presigned_url('put_object') and generate_presigned_post.
What is the difference between these two methods?
# upload a file to a bucket with generate_presigned_url with put_object
s3_client.generate_presigned_url('put_object',
                                 Params={'Bucket': "BUCKET_NAME",
                                         'Key': "OBJECT_KEY"},
                                 ExpiresIn=3600)
# upload a file to a bucket using presigned post
s3_client.generate_presigned_post(Bucket="BUCKET_NAME",
                                  Key="OBJECT_PATH",
                                  ExpiresIn=3600)
Could someone please explain the difference between both?
If we have generate_presigned_post, why was there a generate_presigned_url method with put_object for uploading in the first place?
Note: I know that generate_presigned_post is the recommended method for file uploads and I have used it. However, there is no clear documentation on the difference between these methods.
This is an extended version of #jellycsc's comment. I had posted the same query to AWS Support as well and got the answer below from them.
More detailed explanation is given here
Posting here as it could be useful for someone.
What is the difference between these two methods?
generate_presigned_post() is more powerful because of the POST Policy feature. The POST Policy is simply the set of conditions you define when creating the presigned POST. Using it, you can allow certain MIME types and file extensions, allow multiple files to be uploaded with a given prefix, restrict the file size, and more, none of which is possible with generate_presigned_url().
Please note that both methods can be used to fulfill the same goal, i.e. provide a controlled way for users to upload files directly to S3 buckets. The process is also the same for both: the backend signs the request after validating that the user is authorized, and then the browser sends the file directly to S3.
Differences:
URL structure:
PUT URLs encode everything in the URL itself, as nothing else is communicated back to the client. This means fewer variables can be customized.
POST URLs use multiple fields for different kinds of information. The signing algorithm returns a list of fields along with the URL itself, and the client must send those fields to S3 as well when using the presigned URL.
While PUT URLs provide a destination to upload files without any other required parts, POST URLs are made for forms that can send multiple fields. However, their use is not limited to forms.
Content-Type:
For PUT URLs, the signing must be done for a specific content type. That means you either hardcode a content type on the backend (for example, application/xml if you want to allow users to upload XML documents), or the client must send the desired content type as part of the signing request.
For POST URLs, the policy supports a prefix ("starts-with") constraint as well as an exact match.
Content-Length:
With PUT URLs, you have no control over the size of the uploaded file.
For POST URLs, you can set an allowed size range in the policy.
Sample presigned POST in Python:
response = s3_client.generate_presigned_post(
    Bucket="BUCKET_NAME",
    Key="S3KEY",
    Fields={"Content-Type": "image/jpg"},
    Conditions=[["starts-with", "$Content-Type", "image/"]],
    ExpiresIn=3600)
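To complete the picture, a rough sketch (not from the original answer) of how a client might use the returned URL and fields with the requests library; the bucket, key, and file name are placeholders.

import requests

# 'response' is the dict returned by generate_presigned_post above:
# {"url": "...", "fields": {...}}
with open("photo.jpg", "rb") as fh:
    # All returned fields (key, policy, signature, Content-Type, ...) must be sent
    # as form fields alongside the file part.
    r = requests.post(response["url"], data=response["fields"],
                      files={"file": ("photo.jpg", fh, "image/jpg")})
print(r.status_code)  # S3 returns 204 No Content on success by default

# For comparison, a presigned PUT URL is used directly as the request target:
# put_url = s3_client.generate_presigned_url('put_object', Params={...}, ExpiresIn=3600)
# requests.put(put_url, data=open("photo.jpg", "rb"))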

Is it possible to have a link to raw content of file in Azure DevOps

It's possible to generate a link to the raw content of a file in GitHub; is it possible to do the same with VSTS/Azure DevOps?
Even after reading the existing answers, I still struggled with this a bit, so I wanted to leave a more thorough response.
As others have said, the pattern is (query split onto separate lines for ease of reading):
https://dev.azure.com/{{organization}}/{{project}}/_apis/sourceProviders/{{providerName}}/filecontents
?repository={{repository}}
&path={{path}}
&commitOrBranch={{commitOrBranch}}
&api-version=5.0-preview.1
But how do you find the values for these variables? If you go into your Azure DevOps, choose Repos > Files from the left navigation, and select a particular file, your current url should look something like this:
https://dev.azure.com/{{organization}}/{{project}}/_git/{{repository}}?path=%2Fpackage.json
You should use those values for organization, project, and repository. For path, you'll see a URL-encoded version of the Unix file path. %2F is the encoding for /, so that path is actually just /package.json (a tool like Postman will do that encoding for you).
Commit or branch is pretty self-explanatory; you either know what you want for this value or you should use master. I have hard-coded the API version in the above URL because that's what the documentation currently points to.
For the last variable, you need providerName. In short, you should probably use TfsGit. I got this value from looking through the list of source providers and looking for one with a value of true for supportedCapabilities.queryFileContents.
However, if you just request this URL you'll get a "203 Non-Authoritative Information" response back because you still need to authenticate yourself. Referring again to the same documentation, it says to use Basic auth with any value for the username and a personal access token for the password. You can create a personal access token at https://dev.azure.com/{{organization}}/_usersSettings/tokens; ensure that it has the Token Administration - Read & Manage permission.
If you're unfamiliar with this sort of thing, again Postman is super helpful with getting these requests working before you get into the code.
So if you have a repository with a src directory at the root, and you're trying to get the file contents of src/package.json, your URL should look something like:
https://dev.azure.com/{{organization}}/{{project}}/_apis/sourceProviders/TfsGit/filecontents?repository={{repository}}&commitOrBranch=master&api-version={{api-version}}&path=src%2Fpackage.json
And don't forget the basic auth!
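As a small illustration (not part of the original answer), the same request could be made from Python with the requests library; the organization, project, repository, path, and PAT below are placeholders.

import requests

ORG, PROJECT, REPO = "myOrg", "myProject", "myRepo"
PAT = "your-personal-access-token"

url = ("https://dev.azure.com/" + ORG + "/" + PROJECT +
       "/_apis/sourceProviders/TfsGit/filecontents"
       "?repository=" + REPO +
       "&path=src%2Fpackage.json"      # URL-encoded path to the file
       "&commitOrBranch=master"
       "&api-version=5.0-preview.1")

# Basic auth: any value for the username plus the PAT as the password.
resp = requests.get(url, auth=("anything", PAT))
resp.raise_for_status()
print(resp.text)                       # raw file contents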
Sure, here's the REST call needed:
GET https://feeds.dev.azure.com/{organization}/_apis/packaging/Feeds/{feedId}/packages/{packageId}?includeAllVersions={includeAllVersions}&includeUrls={includeUrls}&isListed={isListed}&isRelease={isRelease}&includeDeleted={includeDeleted}&includeDescription={includeDescription}&api-version=5.0-preview.1
https://learn.microsoft.com/en-us/rest/api/azure/devops/artifacts/artifact%20%20details/get%20package?view=azure-devops-rest-5.0#package
I was able to get the raw contents of a file using this URL.
GET https://dev.azure.com/{organization}/{project}/_apis/sourceProviders/{providerName}/filecontents?serviceEndpointId={serviceEndpointId}&repository={repository}&commitOrBranch={commitOrBranch}&path={path}&api-version=5.0-preview.1
I got this from here.
https://learn.microsoft.com/en-us/rest/api/azure/devops/build/source%20providers/get%20file%20contents?view=azure-devops-rest-5.0
You can obtain the raw URL using Chrome.
Turn on Developer Tools and view the Network tab.
Navigate to the required file in the DevOps portal (Content panel). Once the content view is visible, check the Network tab again and find the URL that starts with "Items?Path"; this is the JSON response that contains the required "url" element.
Drag the filename from the attachments window and drop it into any other MS application to get the raw URL or linked filename.
Most answers address this well, but in the context of a public repo with anonymous access the API is different. Here is one that works in such a scenario:
https://dev.azure.com/{{your_user_name}}/{{project_name}}/_apis/git/repositories/{{repo_name_encoded}}/items?scopePath={{path_to_your_file}}&api-version=6.0
This is the exact equivalent of the "raw" url provided by Github.
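For example, a minimal sketch of fetching such a file with Python's requests library, using the URL pattern above; the organization, project, repo, and path names are placeholders, and no credentials are sent since the project allows anonymous access.

import requests

url = ("https://dev.azure.com/your_user_name/project_name/_apis/git/"
       "repositories/repo_name/items"
       "?scopePath=%2FREADME.md"       # URL-encoded path to the file
       "&api-version=6.0")
resp = requests.get(url)               # no auth needed for a public project
resp.raise_for_status()
print(resp.text)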
Another way that may be helpful if you want to quickly get the raw URL for a specific file that you are browsing:
install the browser extension named "Undisposition"
from the dot menu (top right) choose "Download": the file will open in a new browser tab from which you can copy the URL
(edit: unfortunately this will only work for file types that the browser knows how to open, otherwise it will still offer to download it...)
I am fairly new to this and had an issue accessing a raw file in an Azure DevOps repo. It's straightforward in GitHub.
I wanted to download a file in CMD and Bash using curl.
First I browsed to the file contents in the browser and made a note of the bold sections:
https://dev.azure.com/**myOrg**/_git/**myProjectName**?path=%2F**MyFileName.ps1**
I then constructed the URL similar to what #Zach posted above:
https://dev.azure.com/**myOrg**/**myProjectName**/_apis/sourceProviders/TfsGit/filecontents?repository=**myProjectName**&commitOrBranch=**master**&api-version=5.0-preview.1&path=%2F**MyFileName.ps1**
Now when I paste the above URL into the browser, it displays the content in raw form, similar to GitHub.
The difference was that I had to set up a PAT (Personal Access Token) in my Azure DevOps account and then authenticate the URL in CMD/Bash, for example:
curl -u "<username>:<password>" "https://dev.azure.com/myOrg/myProjectName/_apis/sourceProviders/TfsGit/filecontents?repository=myProjectName&commitOrBranch=master&api-version=5.0-preview.1&path=%2FMyFileName.ps1" -# -L -o MyFileName.ps1

ESRI GPDataFile as input Parameter to GP Toolbox

I'm working with a PHP script that POSTs to a GP Service toolbox (written in Python); the first parameter is supposed to be a GPDataFile. From the documentation, it looks like I can set the value of this parameter to a JSON-formatted string literal, {"url": "http://localhost/export/1234567890.kml"}, and arcpy.GetParameter(0) should handle this object correctly.
Unfortunately I am receiving an error saying 'Please check your parameters'. There are two other parameters on the toolbox, but they are just strings and are working correctly. I am working in ArcGIS 10.0.
The overall goal of this interaction is to send a KML file from our SWF/ActionScript to the PHP script, which saves the KML to our database and subsequently sends it to the GP service to translate it into a GDB and then into individual shapefile objects that are stored in the database for rendering back to the SWF/ActionScript.
Any help or thoughts on how to get the toolbox to accept the JSON structure would be greatly appreciated. I would like to avoid having to send the KML contents as a string object to the toolbox.
The answer can be what maniksundaram wrote in the ESRI forum (https://community.esri.com/thread/107738):
ArcGIS Server will not support direct GPDataFile upload. You have to upload the file using the upload task and give the item ID to the GP service.
Here is the high-level idea to get it to work for any GP service that needs a file upload:
- Publish the geoprocessing service with the upload option.
Refer : ArcGIS Help (10.2, 10.2.1, and 10.2.2)
Operations allowed: Uploads: This capability controls whether a client can upload a file to your GIS server that the tasks within the geoprocessing service would eventually use. The upload operation is mainly used by web clients that need a way to send a file to the server for processing. The upload operation returns a unique ID for the file after the upload completes, which the web application could pass to the geoprocessing service. You may need to modify the maximum file size and timeouts depending on how large an upload you want your server to accept. Check the local REST SDK documentation installed on your ArcGIS Server machine for information on using an uploaded file with a geoprocessing service. This option is off by default. Allowing uploads to your service could possibly pose a security risk. Only turn this on if you need it.
- Upload the file using the upload URL that is generated for the geoprocessing service. It will give you the itemID of the uploaded file in the response.
http://<servername>:6080/arcgis/rest/services/GP/ConvertKMLToLayer/GPServer/uploads/upload
Response JSON:
{"success":true,"item":{"itemID":"ie84b9b8a-5007-4337-8b6f-2477c79cde58","itemName":"SStation.csv","description":null,"date":1409942441508,"committed":true}}
- Invoke the geoprocessing service with the item ID as the GPDataFile input.
For example, the KMLInput value would be {"itemID":"ie84b9b8a-5007-4337-8b6f-2477c79cde58"}
- The result will be added to a map service with the job ID if you have configured viewing the GP results in a map service, or you can read the response as it is returned.
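A rough sketch of that two-step flow in Python with the requests library; the server name, GP service name (ConvertKMLToLayer) and KMLInput parameter come from this answer's examples, while the file name and form field names are assumptions.

import json
import requests

BASE = "http://servername:6080/arcgis/rest/services/GP/ConvertKMLToLayer/GPServer"

# 1. Upload the file; the JSON response contains item["itemID"].
with open("1234567890.kml", "rb") as fh:
    up = requests.post(BASE + "/uploads/upload",
                       files={"file": fh},
                       data={"f": "json"}).json()
item_id = up["item"]["itemID"]

# 2. Submit the job, referencing the upload by itemID for the GPDataFile input.
job = requests.post(BASE + "/submitJob",
                    data={"KMLInput": json.dumps({"itemID": item_id}),
                          "f": "json"}).json()
print(job.get("jobId"), job.get("jobStatus"))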

Can I change the logo of an IBM community through the REST API

I found this piece of documentation that suggests that we should be able to PUT a new logo in a community.
But the documentation also states that it is ignored on input.
Before we start intensive troubleshooting, it would help if someone could confirm that we can indeed change the logo programmatically.
Use the web address in the href attribute to obtain an image that represents the community's logo. The following operations are supported:
GET
Use the web address in the href attribute to obtain the community logo image file. If a logo has not been set, a default image is returned.
PUT
Use the web address in the href attribute to upload a new community logo image and replace the current one.
Attention: Specify the content type of the image file being sent with the request. For example: "Content-Type: image/jpeg"
This is the source:
http://www-10.lotus.com/ldd/appdevwiki.nsf/xpDocViewer.xsp?lookupName=IBM+Connections+4.5+API+Documentation#action=openDocument&res_title=Community_entry_content_ic45&content=pdcontent
Using the IBM SBT SDK 1.0.1, I was able to call CommunityService.updateCommunityLogo(new File("/path/to/my.jpeg"), communityUuid) without any error, but the JPEG I referred to was not set as the community logo.
Maybe the size was not correct?
Sorry for this "non-answer", but it may help other people anyway - to fix the SBT SDK code, at least :-/
UPDATE 2014-JUN-25:
I took a deeper dive into the http.wire logs, and surprisingly the call seems to trigger a logout (or session invalidation) without further notice. The REST request receives 200 OK but also some JavaScript that looks like "hey, confirm who you are", and the browser prompts with a full-window Connections login prompt, although the LTPA token should not have timed out yet.
This is annoying, too, for another reason: If Connections is used inside a framed UI, the "main" application is wiped away after that, forcing Connections to full-window mode.
With IBM SBT SDK 1.0.3 (as of July 17, 2014) and IC5 it is working now. I had no opportunity to test this feature with 1.0.3 and IC45, but with 1.0.2 and IC5 it did NOT work, so it seems that something has been fixed in 1.0.3.
#mpjjonker, you can look at CommunityService.java:
the updateCommunityLogo method uses the /communities/service/html/image URL to PUT the image.
String url = "/communities/service/html/image";
getClientService().put(url, parameters, headers, file, ClientService.FORMAT_NULL);
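For reference, a rough sketch of the raw PUT that this SDK call wraps, using the Python requests library; the server URL, credentials, and the communityUuid query parameter are assumptions - take the exact href from the community's Atom entry as the documentation quoted above describes.

import requests

SERVER = "https://connections.example.com"                 # placeholder server URL
COMMUNITY_UUID = "00000000-0000-0000-0000-000000000000"    # placeholder community UUID
AUTH = ("username", "password")

with open("/path/to/my.jpeg", "rb") as fh:
    resp = requests.put(
        SERVER + "/communities/service/html/image",
        params={"communityUuid": COMMUNITY_UUID},   # assumed parameter name
        headers={"Content-Type": "image/jpeg"},     # required per the quoted docs
        data=fh,
        auth=AUTH,
    )
print(resp.status_code)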
