How to get a persistent link to a document in Shopware 6?

Is it possible to get a persistent static link to a document media object? If we update a document with new details and do "update and replace" via the admin interface, the document still gets a new ID, breaking any external links pointing to a Shopware document.
None of the SHOPWARE_CDN_STRATEGY_DEFAULT options help with this either, as all links seem to have the ID (or some other unique value) prepended to the filename.
Original document: example.com/12345/filename.pdf
Updated document: example.com/67890/filename.pdf

You're in fact not updating the file but creating a new document, with its own unique document number, which is the intended behavior.
This is in the interest of most merchants and their customers: it would be a serious risk if a customer found that an invoice they had received earlier had been altered afterwards. That's why the merchant is encouraged to create a new version of the document instead, so both parties still have access to previous versions.
Just for the sake of completeness: there is a way to replace the file of an existing document using the Admin API.
There's an endpoint which allows you to upload a file to a document:
POST /api/_action/document/{documentId}/upload?fileName=name_of_the_file&extension=pdf
Content-Type: application/json

{
    "url": "http://url.to/some_file.pdf"
}
Alternatively, you can upload a file directly using an HTTP client:
const formData = new FormData();
const file = new File([binaryData], 'test.pdf');
formData.append('file', file, 'test.pdf');

client.request({
    headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'multipart/form-data',
    },
    method: 'POST',
    url: `${baseUrl}/api/_action/document/${documentId}/upload?fileName=test&extension=pdf`,
    formData,
});
Here's the catch though: the endpoint will not allow you to upload a file for a document if the document has already been assigned a file. In that case a DocumentGenerationException('Document already exists') will be thrown, for the reasons mentioned earlier. You can, however, circumvent that exception.
Before you upload a new file, you'll have to patch the document's database entry to unassign the previously assigned file:
PATCH /api/document/{documentId}
Content-Type: application/json

{
    "documentMediaFileId": null
}
Afterwards you should be able to upload a new file to an existing document, keeping the deep link code and id the same.
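Putting the two calls together, here's a minimal sketch of the whole replace-file sequence using fetch against the Admin API. It assumes baseUrl, an admin Bearer token, the documentId, and a publicly reachable file URL are available; error handling is omitted:

// Minimal sketch: unassign the old file, then upload the replacement.
// `baseUrl`, `token`, `documentId` and `fileUrl` are assumed inputs.
async function replaceDocumentFile(baseUrl, token, documentId, fileUrl) {
    const headers = {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json',
    };

    // 1. Unassign the previously attached file so the upload endpoint
    //    no longer throws DocumentGenerationException.
    await fetch(`${baseUrl}/api/document/${documentId}`, {
        method: 'PATCH',
        headers,
        body: JSON.stringify({ documentMediaFileId: null }),
    });

    // 2. Upload the replacement file by URL; the document id and
    //    deep link code stay the same.
    await fetch(`${baseUrl}/api/_action/document/${documentId}/upload?fileName=filename&extension=pdf`, {
        method: 'POST',
        headers,
        body: JSON.stringify({ url: fileUrl }),
    });
}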

Related

Can I use HTML5 download attribute in anchor tag using an API (instead of direct link)?

I have a web page (served from, say, the www.example.com domain after the user logs in) that has the following HTML element:
<a href="../api/values/import" download>Download Import File</a>
The import file is currently saved in Azure Storage as a blob. Assume that the blob (i.e. the import file) requires a temporary SAS token (created for the logged-in user) to be accessed:
https://foo.blob.core.windows.net/myfiles/import.txt?(sasTokenInfo)
All of this could be simplified if the anchor tag could be used as shown below:
<a href="https://foo.blob.core.windows.net/myfiles/import.txt?(sasTokenInfo)" download>Download Import File</a>
However, this runs the risk of the sasTokenInfo expiring before the user clicks on it. The user may linger on this page (which has other info on it) long enough for the SAS token to expire. The simplest thing to do here is to create the SAS token for a longer period, but I don't want to do that.
I am trying to find out whether the better solution is to use the API as shown at the beginning. When the user clicks on the above anchor link (i.e. "../api/values/import"), the idea is to have this API create the blob SAS token for this user and send back the link (containing this SAS token) to the above blob storage import file. The idea is not to read the file in the API but simply to send a link to it, so that the browser can download it directly without involving the www.example.com domain. To facilitate this, I thought that with the following header I would be able to force the browser to download the file directly from Azure blob storage:
Content-Disposition: attachment; filename="https://foo.blob.core.windows.net/myfiles/import.txt"
Apparently, the value of "filename" (in the Content-Disposition header) should not contain path info (see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition). This means I am not able to set the direct blob storage link in this header value.
Questions:
1. Is the API route a viable option?
2. If yes, how can I send a link to a file in Azure Storage through my API to the browser so that the browser automatically downloads the file?
3. What do I need to do in my ASP.NET Core Web API for (2)?
4. What do I need to do on the client-side HTML pages for (2)?

"If yes, how can I send a link to a file in Azure Storage through my API to the browser so that the browser automatically downloads the file?"
I suggest using Ajax: let the Web API generate the blob URL, then create the anchor (href) element inside the success callback to download it.
For more details, refer to the code below:
Client ajax:
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script>
$.ajax({
    url: 'https://localhost:7204/api/values/Download',
    method: 'GET',
    success: function (data) {
        var link = document.createElement('a');
        link.href = data;
        link.download = 'myFile.txt'; // Set the file name here
        document.body.appendChild(link);
        link.click();
        document.body.removeChild(link);
    },
    error: function () {
        console.log('Failed to download file.');
    }
});
</script>
Web API:
[HttpGet("Download")]
public string GetUrl()
{
    return "https://localhost:7204/111.txt";
}
Result: the file downloads automatically.
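The Web API above simply returns a fixed URL; in practice the endpoint would generate a fresh, short-lived SAS URL on every request. The answer's API is ASP.NET Core, but purely as an illustration of the idea, here's a hedged Node.js sketch using the @azure/storage-blob package (account, key, container, and blob names are placeholders):

// Hypothetical server-side sketch: build a short-lived, read-only SAS URL
// for the blob on each request. All names and the key are placeholders.
const {
    StorageSharedKeyCredential,
    generateBlobSASQueryParameters,
    BlobSASPermissions,
} = require('@azure/storage-blob');

function getDownloadUrl() {
    const credential = new StorageSharedKeyCredential('foo', '<account-key>');
    const sas = generateBlobSASQueryParameters({
        containerName: 'myfiles',
        blobName: 'import.txt',
        permissions: BlobSASPermissions.parse('r'),       // read-only
        expiresOn: new Date(Date.now() + 5 * 60 * 1000),  // valid for 5 minutes
    }, credential).toString();
    return 'https://foo.blob.core.windows.net/myfiles/import.txt?' + sas;
}

Because the token is minted at click time, its lifetime can stay short without any risk of expiring while the user lingers on the page.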

Model Derivative Forge API - Field uploadKey not found in signed URL endpoint

I was looking to play around with the Forge API and am trying to translate an .rvt file into a .dwg. I am just following the steps given in the "Step-by-Step" tutorials, and the second step in Task 2 (https://forge.autodesk.com/en/docs/model-derivative/v2/tutorials/translate-to-obj/task2-upload_source_file_to_oss/) says to make a POST request to this endpoint to get the signed URL: https://developer.api.autodesk.com/oss/v2/buckets/<YOUR_BUCKET_KEY>/objects/<YOUR_OBJECT_KEY>/signeds3upload?minutesExpiration=<LIFESPAN_OF_URL>
I make the request and receive {'reason': 'Field uploadKey not found'}. But the steps show that you get the uploadKey from this endpoint? So either I'm missing something really big here, or these steps are too smart for a five-year-old.
Here is what I'm passing into my POST request:
header = {
    'Authorization': 'Bearer ' + access_token,
    'Content-Type': 'application/json'
}
body = {
    'ossbucketKey': 'bucketName',
    'ossSourceFileObjectKey': 'test.rvt',
    'access': 'full',
    'policyKey': 'transient'
}
Note that the new direct-to-S3 upload consists of multiple steps (a sketch follows below):
1. You generate an upload URL using the GET buckets/:bucketKey/objects/:objectKey/signeds3upload endpoint
2. You upload your data to that URL
3. You complete the upload using the POST buckets/:bucketKey/objects/:objectKey/signeds3upload endpoint you mentioned, including the uploadKey you received in step 1
For more details you can refer to this blog post: https://forge.autodesk.com/blog/data-management-oss-object-storage-service-migrating-direct-s3-approach.
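Here's a hedged JavaScript sketch of the three steps using fetch. The token, bucket key, object key, and file buffer are placeholders, and the response shape ({ uploadKey, urls }) follows the blog post above:

// Sketch of the direct-to-S3 upload. `token`, `bucketKey`, `objectKey`
// and `fileBuffer` are assumed inputs.
async function uploadToOss(token, bucketKey, objectKey, fileBuffer) {
    const base = `https://developer.api.autodesk.com/oss/v2/buckets/${bucketKey}/objects/${objectKey}`;

    // 1. GET a signed upload URL. No body is sent here, which is why
    //    this call does not need (or know) an uploadKey yet.
    const res = await fetch(`${base}/signeds3upload?minutesExpiration=10`, {
        headers: { Authorization: `Bearer ${token}` },
    });
    const { uploadKey, urls } = await res.json();

    // 2. PUT the raw bytes straight to the signed S3 URL (no auth header).
    await fetch(urls[0], { method: 'PUT', body: fileBuffer });

    // 3. POST to the same endpoint to complete the upload; only this
    //    call carries the uploadKey from step 1.
    await fetch(`${base}/signeds3upload`, {
        method: 'POST',
        headers: {
            Authorization: `Bearer ${token}`,
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({ uploadKey }),
    });
}

The "Field uploadKey not found" error comes from sending a body to what should be the bodyless GET in step 1: with a body present, the call is treated as the completion POST, which requires an uploadKey.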

Making a PATCH method request with a file to Common Data Service

I'm trying to create a Python script which can use the PATCH method to upload a file into Microsoft's Common Data Service. I'm successfully making GET, POST, PATCH, and DELETE calls with simple data, but have so far been unable to configure it so that I can upload a file.
I've been using the Requests library for Python, with the requests.patch function, to try updating the data. I'm attempting to upload a .csv file into the field; the file I'm uploading has a filesize of 1 KB.
If I upload the data directly into the Common Data Service through the built-in data interface, my browser is able to correctly make a PATCH call. I've attempted to copy that call as closely as I can, but have had zero success.
(Screenshots: the file field in Common Data Service, and the PATCH call made by the web browser.)
What is the correct way to make a PATCH request with a file to Microsoft's Common data service?
It turned out I had made a mistake with the URL in my request - I had left out which field I was uploading data to.
Incorrect URL:
https://90g9j3gf.crm4.dynamics.com/api/data/v9.0/test_entity(34cd854c-1175-4778-bf95-e1ce12dea3b0)
Corrected URL:
https://90g9j3gf.crm4.dynamics.com/api/data/v9.0/test_entity(34cd854c-1175-4778-bf95-e1ce12dea3b0)/test_field
The code I used to make the request:
import requests

http_headers = {
    'Authorization': 'Bearer ' + token['access_token'],
    'Content-Type': 'application/octet-stream',
    'x-ms-file-name': 'test.csv'
}
filedata = open("project-folder\\test.csv", "rb")
patch_req = requests.patch(
    url,  # my URL is defined elsewhere
    headers=http_headers,
    data=filedata
)
This now works correctly for me.

AWS Lambda fails to return PDF file

I have created a Lambda function using Serverless. This function is fired via API Gateway on a GET request and should return a PDF file from a buffer. I'm using html-pdf to create the buffer, and I'm trying to return the PDF file with the following code:
let response = {
    statusCode: 200,
    headers: { 'Content-type': 'application/pdf' },
    body: buffer.toString('base64'),
    isBase64Encoded: true,
};
return callback(null, response);
but the browser just fails to load the PDF, so I don't know exactly how to return the PDF file directly to the browser. I couldn't find a solution for that.
Well, I found the answer.
The settings in my response object are fine; I just had to manually change the settings in API Gateway for this to work in the browser. I added "*/*" to binary media types under the binary settings in the API Gateway console.
API Gateway:
1. Log into your console
2. Choose your API
3. Click on binary support in the dropdown
4. Edit binary media types and add "*/*"
Frontend:
Open the API URL in a new tab (target="_blank"). The browser then handles the base64-encoded response; in my case with Chrome, the browser just opens the PDF in a new tab, exactly like I want it to.
After spending several hours on this, I found out that if you set Content handling to Convert to binary (CONVERT_TO_BINARY), the entire response has to be base64; otherwise I would get an error: Unable to base64 decode the body.
Therefore my response now looks like:
callback(null, buffer.toString('base64'));
(Screenshots: the Integration response, the Method response, and the Binary Media Types settings.)
If you have a gigantic PDF, it will take a long time for Lambda to return it, and Lambda is billed per 100 ms.
I would save it to S3 first and then let the Lambda return the S3 URL to the client for downloading, as sketched below.
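Here's a hedged sketch of that approach with the Node AWS SDK v2: upload the generated PDF to S3, then return a short-lived presigned download URL. The bucket and key names are placeholders, and createPdfBuffer stands in for whatever html-pdf wrapper produces the buffer:

// Hypothetical handler: store the PDF in S3, return a presigned URL.
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

module.exports.handler = async () => {
    const buffer = await createPdfBuffer(); // hypothetical html-pdf wrapper
    await s3.putObject({
        Bucket: 'my-pdf-bucket',            // placeholder bucket
        Key: 'report.pdf',                  // placeholder key
        Body: buffer,
        ContentType: 'application/pdf',
    }).promise();

    // Presigned GET URL, valid for 5 minutes.
    const url = s3.getSignedUrl('getObject', {
        Bucket: 'my-pdf-bucket',
        Key: 'report.pdf',
        Expires: 300,
    });
    return { statusCode: 200, body: JSON.stringify({ url }) };
};

This also sidesteps the binary media type configuration entirely, since the Lambda response is plain JSON.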
I was having a similar issue where PDFs were downloaded as base64; it started happening when I changed the serverless.yml file from:
binaryMediaTypes:
  - '*/*'
to
binaryMediaTypes:
  - 'application/pdf'
  - '....other media types'
The issue is due to the way AWS implemented this feature. From the AWS documentation:
When a request contains multiple media types in its Accept header, API Gateway honors only the first Accept media type. If you can't control the order of the Accept media types and the media type of your binary content isn't the first in the list, add the first Accept media type in the binaryMediaTypes list of your API. API Gateway handles all content types in this list as binary.
Basically, if the first media type contained in the Accept request header is not in your binaryMediaTypes list, you will get base64 back.
I checked the request in the browser; the first media type in the Accept header was text/html, so I got it working after changing my settings to:
binaryMediaTypes:
  - 'application/pdf'
  - '....other media types'
  - 'text/html'
Hope this helps anyone with the same issue.
The solution above only works for one particular content type; you can't cover more content types that way. Follow just these two steps to resolve the multiple-content-type issue:
1. Tick the Use Lambda Proxy integration checkbox (API Gateway --> API --> method --> Integration Request)
2. Create your response as:
let response = {
    statusCode: 200,
    headers: {
        'Content-type': 'application/pdf', // you can use any content type
        'content-disposition': 'attachment; filename=test.pdf' // key of success
    },
    body: buffer.toString('base64'),
    isBase64Encoded: true
};
return response;
Note: this is not secure.
Instead of doing all this, it's better to use the serverless-apigw-binary plugin in your serverless.yml file.
Add:
plugins:
  - serverless-apigw-binary

custom:
  apigwBinary:
    types:
      - "application/pdf"
Hope that will help someone.

How to set document level permissions via SharePoint Designer 2013 workflow

I have a document approval SharePoint Designer 2013 workflow. The workflow reacts to the creation of a new folder inside a document library. The newly created folder will contain new documents uploaded by users. I found out that I can break/set permissions on the newly created folder using the REST API:
/_api/web/lists/getByTitle('document library')/items('id of the new folder')/breakroleinheritance(copyRoleAssignments=true,clearSubscopes=true)
My problem is how to break/set permissions on the documents uploaded inside the new folder, ideally via the REST API. I really cannot find a way to do it. I need to go one level down from the folder to set permissions on the individual documents. Any help would be really appreciated.
The simple process of setting item-level permissions is not available in SharePoint 2013 workflows. The only way I was able to do it was through REST API calls made inside an App Step.
There are two calls to make (a sketch of the second appears after the examples below):
BreakRoleInheritance
AddRoleAssignment
Via the getfilebyserverrelativeurl endpoint
Endpoint URI: /_api/web/getfilebyserverrelativeurl('<file url>')/ListItemAllFields/breakroleinheritance(true)
Method: POST
Headers: { Accept: application/json;odata=verbose, X-RequestDigest: <value> }
where <file url> is a server-relative URL to a file.
JavaScript example:
function breakRoleInheritance(webUrl, fileUrl) {
    return $.ajax({
        url: webUrl + "/_api/web/GetFileByServerRelativeUrl('" + fileUrl + "')/ListItemAllFields/breakroleinheritance(copyRoleAssignments=true,clearSubscopes=true)",
        type: "POST",
        contentType: "application/json;odata=verbose",
        headers: {
            "Accept": "application/json;odata=verbose",
            "X-RequestDigest": $("#__REQUESTDIGEST").val()
        }
    });
}
Via the ListItem resource
Endpoint URI: /_api/web/lists/getByTitle('<list title>')/items('<id>')/breakroleinheritance(copyRoleAssignments=true,clearSubscopes=true)
Method: POST
Headers: { Accept: application/json;odata=verbose, X-RequestDigest: <value> }
where <list title> is the list or library title and <id> is the id of the list item associated with the file.
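Breaking inheritance only detaches the item's permissions; to then grant a user or group access, the AddRoleAssignment call mentioned earlier is made in the same style. A hedged sketch (the principal id and role definition id are placeholders, which can be looked up via /_api/web/siteusers and /_api/web/roledefinitions):

// Grants the given principal the given role definition on a single item.
// `principalId` and `roleDefId` are placeholder ids to look up beforehand.
function addRoleAssignment(webUrl, listTitle, itemId, principalId, roleDefId) {
    return $.ajax({
        url: webUrl + "/_api/web/lists/getByTitle('" + listTitle + "')/items(" + itemId +
            ")/roleassignments/addroleassignment(principalid=" + principalId +
            ",roledefid=" + roleDefId + ")",
        type: "POST",
        headers: {
            "Accept": "application/json;odata=verbose",
            "X-RequestDigest": $("#__REQUESTDIGEST").val()
        }
    });
}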
You have mentioned: "The workflow reacts to the creation of a new folder inside a document library. The newly created folder will contain new documents uploaded by users." I understand that the workflow is associated with the Folder content type, and that when a folder is created the breaking of permission inheritance works fine.
What you are missing is a workflow triggered when documents are uploaded. You need to associate your workflow with either the document content type or all content types, so that the workflow acts on any item that's created - folder or file.
NB: files inside a folder inherit the permissions of the folder by default.
