Does anyone know how the screen's attached-file functionality is implemented - specifically, how the files are stored (as BLOBs? If so, in which table?) and how the link to the record is stored in the database?
As an example, take the Bills and Adjustments screen -> Files icon at the top. I can't find any BLC code, or even JavaScript in the page source, that shows how this is done or where the storage takes place (either on the web server or as a BLOB in the database).
Any guidance would be appreciated.
Thanks...
UploadFile contains the list of files
NoteDoc links the NoteId field to the FileId
UploadFileRevision contains the actual binary data in the Data column for each revision of a file. If you use an external storage provider (S3, Azure Blob Storage or Box.com), the data won't be in the record but will instead be stored externally.
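For anyone who wants to poke at this directly, below is a minimal C# sketch of pulling the latest stored revision of a file straight from those tables with plain ADO.NET. The join on FileID and the Name/FileRevisionID column names are assumptions based on the description above, so verify them against your own Acumatica database before relying on this.

using System;
using System.Data.SqlClient;

class ReadAttachedFile
{
    static void Main()
    {
        // Placeholder connection string -- point this at your Acumatica database.
        using (var conn = new SqlConnection("Server=.;Database=Acumatica;Integrated Security=true"))
        {
            conn.Open();

            // UploadFile lists the attached files; UploadFileRevision holds the binary
            // content in the Data column (empty when an external storage provider is used).
            var cmd = new SqlCommand(
                @"SELECT TOP 1 r.Data
                    FROM UploadFile f
                    JOIN UploadFileRevision r ON r.FileID = f.FileID
                   WHERE f.Name LIKE @name
                   ORDER BY r.FileRevisionID DESC", conn);
            cmd.Parameters.AddWithValue("@name", "%invoice.pdf");

            var data = cmd.ExecuteScalar() as byte[];
            Console.WriteLine(data == null
                ? "No stored content found (missing file, or content stored externally)."
                : "Read " + data.Length + " bytes.");
        }
    }
}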
Related
I'm trying to resize an image from blob storage using an Azure Function - an easy task, lots of samples, works great, but only when the resized image is saved to a different file. My problem is that I would like to replace the original image with the resized one, keeping the same location and name.
When I set the output blob to be the same as the input blob, the function is triggered over and over again and never finishes.
Is there any way I can change a blob using an Azure Function and store the result in the same file?
The easiest option is to accept two invocations for the same file, but add a check on the size of the incoming file. If the size is already OK, do nothing and quit without changing the file again. This should break you out of the loop.
The blob trigger uses Storage Logs to watch for new or changed blobs. It then compares the changed blob against the Blob Receipts in a container named azure-webjobs-hosts in the Azure storage account. Each receipt has an ETag associated with it, so when you change a blob, the ETag changes and the blob is submitted to the function again.
Unless you want to get fancy and update the ETags in the receipts from within the function (not sure if that's feasible), your changed files will go for re-processing.
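A rough sketch of that check in C#, assuming a blob-triggered function on an images container and a purely illustrative size threshold (the actual resize-and-reupload step is elided):

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ResizeInPlace
{
    // Illustrative threshold: anything at or below this is treated as "already resized".
    private const long MaxBytes = 500 * 1024;

    [FunctionName("ResizeInPlace")]
    public static void Run(
        [BlobTrigger("images/{name}")] Stream input,
        string name,
        ILogger log)
    {
        // Second invocation (after the blob was overwritten with the resized image):
        // the file is already small enough, so return without writing and the loop stops.
        if (input.Length <= MaxBytes)
        {
            log.LogInformation($"{name} is already {input.Length} bytes, skipping.");
            return;
        }

        // First invocation: resize the image here and upload the result back to
        // images/{name} with the storage SDK, which triggers the function once more.
    }
}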
I have created a Spotfire visualization and saved it. However, when it is accessed through a web link, I get the error below:
Some part of the data table "XXXXxxxxx" could not be loaded. A data source may be missing or has been changed.
I have configured the following settings:
1) The data is loaded as "Linked to source".
2) Data Connections Properties --> Data Connection Settings --> Data Source --> Credentials --> credentials are provided (profile credentials have been shared)
3) I have used a stored procedure, and it has been created in the database to which Spotfire has access (including the schema).
Please help me to solve the issue.
You mentioned that you accessed the DXP via a web link, which suggests to me that you are using WebPlayer. WebPlayer will not have access to files that are stored locally. WebPlayer will load data remotely using database connections and Information Links, however.
I assume that the data loads properly when you open the analysis in the desktop client?
A workaround would be to save a copy of the file for access in WebPlayer and select "Stored Data" (i.e. embedded) in the "Change all applicable data sources to:" dropdown.
This issue has been resolved. It was caused by the temp tables used in the query; as per my observation, loading behaves oddly when a temp table is used in the stored procedure.
We have a Kentico 9 instance with media library integrated with Azure blob storage. This means that Kentico's default media selector form control returns an absolute URL of the Azure blob. However, as well as the URL, I need to access the media file info object itself to get additional properties (such as file width).
In the past when using Kentico's own file storage I've been able to build a custom media selector and pull the media file GUID from the returned URL. However, this isn't possible when integrating with Azure storage. Does anyone have any ideas how I might get the file ID or GUID without building my own media selector from scratch?
How about using a custom form control with a UniSelector control, to which you would pass all the files from your Azure media library?
You could get the files using something like:
// Load the Azure-backed media library, then list its files with just the columns the selector needs
var mediaLibrary = MediaLibraryInfoProvider.GetMediaLibraryInfo("MyAzureLibrary", "SiteName");
var mediaFiles = MediaFileInfoProvider.GetMediaFiles()
    .Columns("FileName", "FilePath", "FileGUID")
    .WhereEquals("FileLibraryID", mediaLibrary.LibraryID);
This way you get a "nice" dialog that lists all the files in a particular folder, and you can set up the UniSelector to store the GUIDs of those files instead of their paths.
The disadvantage is that you don't get the nice tree view you have in the media library. Once you have the GUID of a file, you can reconstruct the full absolute URL.
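For the original question about reading properties such as the image width, here is a small sketch of going from the stored GUID back to the media file object; it assumes the Kentico 9 API's MediaFileInfoProvider.GetMediaFileInfo(fileGuid, siteName) overload:

using System;
using CMS.MediaLibrary;

public static class MediaFileLookup
{
    public static int? GetImageWidth(Guid fileGuid, string siteName)
    {
        // Loads the full media file record whether the binary lives on the local
        // file system or in Azure blob storage.
        MediaFileInfo file = MediaFileInfoProvider.GetMediaFileInfo(fileGuid, siteName);

        // Width/height are stored on the record itself, so no need to touch the binary.
        return file?.FileImageWidth;
    }
}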
If you want the tree view, you could use the CMSTreeView control, but it is more complicated and you would probably need to place it inside a modal window so that it doesn't interfere with other content. Modifying the built-in media selector form control is not really possible, because it is part of Kentico's own source code.
Try to enable the following setting:
Content -> Media -> Security -> Check files permissions
In that case inserted media URLs should remain as permanent URLs (because the media handler needs to check the permissions) and you should be able to extract the GUID from the URL as you are used to.
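Pulling the GUID back out of such a URL is then just string work. A small sketch, assuming the permanent URLs keep Kentico's usual /getmedia/<guid>/<file name> shape:

using System;
using System.Text.RegularExpressions;

public static class MediaUrlParser
{
    // Matches the GUID segment in URLs like /getmedia/<guid>/photo.jpg
    private static readonly Regex GuidSegment = new Regex(
        @"/getmedia/([0-9a-fA-F\-]{36})/", RegexOptions.Compiled);

    public static Guid? ExtractFileGuid(string mediaUrl)
    {
        var match = GuidSegment.Match(mediaUrl);
        Guid guid;
        if (match.Success && Guid.TryParse(match.Groups[1].Value, out guid))
        {
            return guid;
        }
        return null;
    }
}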
I am using the Azure Storage Node.js library, and what I need to do is copy an image from one blob to another.
First I tried getBlobToFile to download the image to a temporary location on disk and then createBlockBlobFromFile from that temp location. That method did the job, but for some reason the copy didn't complete in about 10% of cases.
Then I tried using getBlobToText and passing the result to createBlockBlobFromText; I also tried passing options to mark the blob content as an image. That method failed completely; the image wouldn't even open after the copy.
Perhaps there is a way to copy a blob and paste it into another blob, but I didn't find such a method.
What else can I do?
I'm not sure what your particular copy error is, but... with getLocalBlobToFile() you're physically moving blob content from blob storage to your VM (or local machine), and then with createBlockBlobFromLocalFile() you're pushing the entire contents back to blob storage, which results in two physical network moves.
The Azure Storage system supports blob copy as a first-class operation. While it's available via a REST API call, it's also wrapped in the same SDK you're using, in the method BlobService.startCopyBlob() (source code here). This instructs the storage service to initiate an async copy operation, completely within the storage system (meaning no download + upload on your side). You'll be able to set source and destination, set timeouts, etc. (all parameters are fully documented in the source code).
The link in the accepted answer is broken, although the method is correct: startCopyBlob is documented here:
(Updated: Jan 3, 2020) https://learn.microsoft.com/en-us/javascript/api/azure-storage/BlobService?view=azure-node-latest#azure_storage_BlobService_createBlockBlobFromLocalFile
If I upload a file to an Azure blob container where a file with the same name already exists, it overwrites that file. How can I avoid the overwrite? The scenario is below...
Step 1 - upload a file "abc.jpg" to Azure in a container called, say, "filecontainer"
Step 2 - once it has been uploaded, try uploading a different file with the same name to the same container
Output - it overwrites the existing file with the latest upload
My requirement - I want to avoid this overwrite, as different people may upload files with the same name to my container.
Please help
P.S.
- I do not want to create different containers for different users
- I am using the REST API with Java
Windows Azure Blob Storage supports conditional headers, which you can use to prevent blobs from being overwritten. You can read more about conditional headers here: http://msdn.microsoft.com/en-us/library/windowsazure/dd179371.aspx.
Since you don't want a blob to be overwritten, you need to specify the If-None-Match conditional header and set its value to *. If the blob already exists, the upload operation then fails with a Precondition Failed (412) error.
Another idea would be to check for the blob's existence just before uploading (by fetching its properties); however, I would not recommend this approach as it may lead to concurrency issues.
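For illustration, here is what that looks like with the (older) .NET storage client; the question uses Java and the raw REST API, where you would set the If-None-Match: * header on the Put Blob request yourself, so treat this purely as a sketch of the pattern:

using System;
using System.IO;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class NoOverwriteUpload
{
    static void Upload(CloudBlobContainer container, string localPath, string blobName)
    {
        CloudBlockBlob blob = container.GetBlockBlobReference(blobName);
        try
        {
            using (var stream = File.OpenRead(localPath))
            {
                // GenerateIfNotExistsCondition() sends "If-None-Match: *" with the request,
                // so the upload fails instead of silently overwriting an existing blob.
                blob.UploadFromStream(stream, AccessCondition.GenerateIfNotExistsCondition());
            }
        }
        catch (StorageException ex)
            when (ex.RequestInformation.HttpStatusCode == 412 ||
                  ex.RequestInformation.HttpStatusCode == 409)
        {
            Console.WriteLine(blobName + " already exists - pick a different name or version it.");
        }
    }
}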
You have no control over the name your users upload their files with. You do, however, have control over the name you store those files with. The standard way is to generate a Guid and name each file accordingly. The chance of a conflict is almost zero.
Simple pseudocode looks like this:
//generate a Guid and rename the file the user uploaded with the generated Guid
//store the name of the file in a dbase or what-have-you with the Guid
//upload the file to the blob storage using the name you generated above
Hope that helps.
Let me put it this way:
Step one - user X uploads a file "abc1.jpg" and you save it to a local folder XYZ
Step two - user Y uploads another file with the same name "abc1.jpg", and now you save it again in the same local folder XYZ
What do you do now?
With this I am illustrating that your question does not relate to Azure in any way!
Just do not rely on the original file names when saving files, wherever you are saving them. Generate random names (GUIDs, for example) and "attach" the original name as metadata.
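Putting the two answers together, here is a short C# sketch of the pattern. It is only an illustration (the question itself uses the REST API from Java, where the equivalent is choosing the blob name yourself and sending the original name as an x-ms-meta-* header):

using System;
using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

static class UploadWithGeneratedName
{
    static Uri Upload(CloudBlobContainer container, Stream content, string originalName)
    {
        // Store the blob under a fresh GUID so two users uploading "abc1.jpg" never collide.
        CloudBlockBlob blob = container.GetBlockBlobReference(Guid.NewGuid().ToString("N"));

        // Keep the user's original file name as metadata rather than using it as the blob name.
        blob.Metadata["originalname"] = originalName;

        // Metadata assigned before the call goes out with the Put Blob request.
        blob.UploadFromStream(content);
        return blob.Uri;
    }
}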