Active Storage returns attached? as true in console but on the Rails server it returns false - rails-activestorage

I am new to Rails ActiveStorage and am facing some issues with image uploading.
When I upload an image it is uploaded successfully, but when I try to fetch it, the app returns attached? as false. However, when I check the same record in the console, it returns the image URL.
Rails server output:

I ran into a similar situation when I had multiple records attached to the same blob.
Not sure if that's what happened here, but if you had two companies using the same attachment and then purged that attachment from one record, it deletes both the blob reference and the file itself without removing the other records' attachments. This means one record will still sometimes think it has a file attached (as it's still associated with a blob) even though the underlying file is gone.
A good way to find out is to check in the Rails console:
obj.image.blob.filename
This will show whether the actual file associated with the object exists, rather than just its blob. It's a bug in Active Storage that they're apparently addressing; not sure if it applies here or not.
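As a rough console sketch of that check (obj and image are the record and attachment names from above; the service lookup may differ slightly between Rails versions):
# Check the attachment row, the blob record, and whether the underlying
# file is actually present on the storage service.
obj.image.attached?                                 # does an attachment row exist?
obj.image.blob.filename                             # filename stored on the blob record
obj.image.blob.service.exist?(obj.image.blob.key)   # is the file really there?
# On older Rails versions, use ActiveStorage::Blob.service.exist?(obj.image.blob.key) instead.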

Related

S3 storage without versioning gets cached images

I have a strange problem with my S3 storage.
I am developing a web application that needs image storage.
In the application we need to update the images frequently and display them live, so we disabled version management on the bucket.
We store the URLs in a Postgres DB and display them on the website.
But sometimes (I can't tell under which conditions, because it seems random) the app displays the old version of an image.
I tried setting some things in the metadata of my update request, but it doesn't seem to do anything.
We also tried adding a ?versionId=null parameter at the end of the URL, but we keep having issues.
Does anyone have an idea about this?

Pushing documents (blobs) for indexing - Azure Search

I've been working with Azure Search + Azure Blob Storage for a while, and I'm having trouble indexing incremental changes for newly uploaded files.
How can I refresh the index after uploading a new file into my blob container? These are my steps after uploading a file (I'm using the REST service to perform these actions): I'm using Microsoft Azure Storage Explorer [link].
Through this app I uploaded my new file to a folder that was created earlier. After that, I used the HTTP REST API to perform a 'Run' indexer command, as you can see in this [link].
The indexer shows that my new file was successfully added, but when I search, the content of this new file is not found.
Does anybody know how to add this new file to the index, and how to find it by searching for its content?
I'm following the Microsoft tutorials, but I couldn't find a solution for this issue.
Thanks, guys!
Assuming everything is set up correctly, you don't need to do anything special - new blobs will be picked up and indexed the next time the indexer runs according to its schedule, or when you run the indexer on demand.
However, when you run the indexer on demand, successful completion of the Run Indexer API means only that the request to run the indexer has been submitted; it does not mean that the indexer has finished running. To determine when the indexer has actually finished running (and to observe any errors), you should use the Indexer Status API.
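For reference, the status check is a simple GET against the indexer (the service name, indexer name, and API version are placeholders):
GET https://[service name].search.windows.net/indexers/[indexer name]/status?api-version=[api-version]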
If you still have questions, please let us know your service name and indexer name and we can take a closer look at the telemetry.
I'll try to describe how I figured out this issue.
Firstly, I created a DataSource with this command:
POST https://[service name].search.windows.net/datasources?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-data-source.
Secondly, I created the Index:
POST https://[servicename].search.windows.net/indexes?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-index
Finally, I created the Indexer. The problem happened at this step, because this is where all the configuration is set.
POST https://[service name].search.windows.net/indexers?api-version=[api-version]
https://learn.microsoft.com/en-us/rest/api/searchservice/create-indexer
After all of this is done, the indexer starts indexing all content automatically (once there is content in the blob storage).
The crucial part comes now: while your indexer is trying to extract the text from your files, issues can occur when the type of a file is not indexable. For example, there are two properties you must pay attention to: excluded extensions and indexed extensions.
If you don't list the types properly, the indexer throws an exception. The feedback message (which in my opinion is not good, and was rather misleading) says that to avoid this error you should set the indexer to '"dataToExtract" : "storageMetadata"'.
That setting means you index only the metadata and no longer the content of your files, so you cannot search on the content and retrieve it.
After that, the same message at the bottom says that to avoid this issue you should set two properties (which solved the problem):
"failOnUnprocessableDocument" : false,"failOnUnsupportedContentType" : false
Now everything is working properly. I appreciate your help @Eugene Shvets, and I hope this can be useful for someone else.
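For context, here is a minimal sketch of where those settings live in the Create Indexer request body (the indexer, data source, and index names as well as the extension lists are placeholder/illustrative values):
{
  "name": "[indexer name]",
  "dataSourceName": "[datasource name]",
  "targetIndexName": "[index name]",
  "parameters": {
    "configuration": {
      "dataToExtract": "contentAndMetadata",
      "indexedFileNameExtensions": ".pdf,.docx,.txt",
      "excludedFileNameExtensions": ".png,.jpg",
      "failOnUnprocessableDocument": false,
      "failOnUnsupportedContentType": false
    }
  }
}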

SharePoint 2013 REST upload from App works on image, fails on video

Has anyone tried to upload a video via REST to SharePoint 2013 from a SharePoint hosted app?
Below are my two POSTs. The first one, an image, works fine. The second one does add my video, but it throws a 404 (Not Found). Subsequent executions do not overwrite it but instead create new video files with some random letters appended. The subsequent executions also return the 404.
I should also point out that the overwrite flag is obviously being ignored, because it always creates a new file. Further, when I tried to use the "manipulated" video URL that you see in a library after uploading, it blew up with a server error.
My suspicion is that it's because of the way SP2013 handles videos, creating items that don't retain their extension the way an image does. Does anyone know for sure?
Or know if there's some sort of RESPONSE that is sent back that would cause the 404?
http://app2-6040b7dbcd33cc.sp13apps-qa.PATH/sites/XDevT/CustomNewsFeedEntry/_api/SP.AppContextSite(@TargetSite)/web/lists/getByTitle(@TargetLibrary)/RootFolder/Files/add(url=@TargetFileName,overwrite='true')?@TargetSite=%27http://teamsites13-qa.PATH/sites/XDevT%27&@TargetLibrary=%27NewsFeedVideos%27&@TargetFileName=%27cg-overlay-img.jpg%27
http://app2-6040b7dbcd33cc.sp13apps-qa.PATH/sites/XDevT/CustomNewsFeedEntry/_api/SP.AppContextSite(@TargetSite)/web/lists/getByTitle(@TargetLibrary)/RootFolder/Files/add(url=@TargetFileName,overwrite='true')?@TargetSite=%27http://teamsites13-qa.PATH/sites/XDevT%27&@TargetLibrary=%27NewsFeedVideos%27&@TargetFileName=%27WMV_Movie.wmv%27

Avoid overwriting blobs in Azure

If I upload a file to an Azure blob container where a file with that name already exists, it overwrites the file. How can I avoid overwriting it? Below is the scenario...
Step 1 - upload file "abc.jpg" to Azure in a container called, say, "filecontainer"
Step 2 - once it is uploaded, try uploading a different file with the same name to the same container
Output - it overwrites the existing file with the latest upload
My requirement - I want to avoid this overwrite, as different people may upload files with the same name to my container.
Please help.
P.S.
- I do not want to create different containers for different users
- I am using the REST API with Java
Windows Azure Blob Storage supports conditional headers, which you can use to prevent overwriting of blobs. You can read more about conditional headers here: http://msdn.microsoft.com/en-us/library/windowsazure/dd179371.aspx.
Since you want a blob not to be overwritten, you would need to specify the If-None-Match conditional header and set its value to *. This causes the upload operation to fail with a Precondition Failed (412) error if the blob already exists.
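A rough sketch of what that looks like at the REST level (the account name, API version, auth signature, and content length are placeholders; the container and blob names are from your scenario):
PUT https://[account].blob.core.windows.net/filecontainer/abc.jpg HTTP/1.1
x-ms-version: [api version]
x-ms-blob-type: BlockBlob
If-None-Match: *
Authorization: SharedKey [account]:[signature]
Content-Length: [length]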
Another idea would be to check for the blob's existence just before uploading (by fetching its properties); however, I would not recommend this approach, as it may lead to concurrency issues.
You have no control over the name your users upload their files with. You do, however, have control over the name you store those files with. The standard way is to generate a GUID and name each file accordingly. The chances of a conflict are almost zero.
Simple pseudocode looks like this:
//generate a Guid and rename the file the user uploaded with the generated Guid
//store the name of the file in a dbase or what-have-you with the Guid
//upload the file to the blob storage using the name you generated above
Hope that helps.
Let me put it this way:
Step one - user X uploads file "abc1.jpg" and you save it to a local folder XYZ.
Step two - user Y uploads another file with the same name, "abc1.jpg", and now you save it again in the local folder XYZ.
What do you do now?
With this I am illustrating that your question does not relate to Azure in any way!
Just do not rely on original file names when saving files, wherever you are saving them. Generate random names (GUIDs, for example) and "attach" the original name as metadata.

Amazon S3 Browser-Based Upload - Prevent Overwrites

We are using Amazon S3 for images on our website, and users upload the images/files directly to S3 through our website. In our policy file we ensure it "begins-with" "upload/". Anyone is able to see the full URLs of these images, since they are publicly readable after they are uploaded. Could a hacker come in and use the policy data in the JavaScript and the URL of the image to overwrite these images with their own data? I see no way to prevent overwrites after uploading once. The only solution I've seen is to copy/rename the file to a folder that is not publicly writable, but that requires downloading the image and then uploading it again to S3 (since Amazon can't really rename in place).
If I understood you correctly, the images are uploaded to Amazon S3 storage via your server application.
So only your application has Amazon S3 write permission. Clients can upload images only through your application (which stores them on S3). A hacker can only force your application to upload an image with the same name and overwrite the original one.
How do you handle the situation when a user uploads an image with a name that already exists in your S3 storage?
Consider the following actions:
The first user uploads an image some-name.jpg
Your app stores that image in S3 under the name upload-some-name.jpg
A second user uploads an image some-name.jpg
Will your application overwrite the original one stored in S3?
I think the question implies the content goes directly to S3 from the browser, using a policy file supplied by the server. If that policy file sets an expiration, for example one day in the future, then the policy becomes invalid after that. Additionally, you can set a starts-with condition on the writable path.
So the only way a hacker could use your policy files to maliciously overwrite files is to get a new policy file, and then overwrite files only in the path specified. But by that point, you will have had the chance to refuse to provide the policy file, since I assume that is something that happens after authenticating your users.
So, in short, I don't see a danger here if you are handing out properly constructed policy files and authenticating users before doing so. No need to make copies of anything.
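For illustration, a policy document along those lines might look like this (the bucket name, per-user prefix, expiration, and size limit are placeholder values):
{
  "expiration": "2014-01-01T00:00:00Z",
  "conditions": [
    {"bucket": "[your-bucket]"},
    ["starts-with", "$key", "upload/[user-id]/"],
    {"acl": "public-read"},
    ["content-length-range", 0, 10485760]
  ]
}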
Actually, S3 does have a copy feature that works great:
Copying Amazon S3 Objects
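A copy is just a PUT with a copy-source header, roughly like this (the bucket, destination key, and signature are placeholders; the source key follows the upload/ prefix from the question):
PUT /[destination-key] HTTP/1.1
Host: [your-bucket].s3.amazonaws.com
x-amz-copy-source: /[your-bucket]/upload/some-name.jpg
Authorization: [signature]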
But as amra stated above, doubling your space by copying sounds inefficient.
Maybe it would be better to give the object some kind of unique ID like a GUID, and to set additional user metadata beginning with "x-amz-meta-" for more information about the object, like the user that uploaded it, a display name, etc...
On the other hand, you could always check whether the key already exists and prompt with an error.
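Checking for an existing key is just a HEAD request against it (a 404 means the name is free, a 200 means it is already taken); the bucket and signature are placeholders, and the key follows the naming used above:
HEAD /upload/some-name.jpg HTTP/1.1
Host: [your-bucket].s3.amazonaws.com
Authorization: [signature]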
