I have a form with two fields:
File upload field
Single-line field
I want to save the URL of the uploaded file in the single-line text field, so that I also have the storage address of the file, because I need that address. How should I do this?
To do this, I first created a new storage path using the gform_upload_path filter, but I don't know how to insert this new path into the single-line text field.
Please help me and write the code.
Related
I have a JSON document with some properties, including an attachment (abc.txt).
How to save attachments in Couchbase using Node.js?
Take a look at the Couchbase Node.js SDK
You have two choices in how to store the attachment:
Inline with the document. If the attachment is small and/or easily embedded into JSON (the attachment is valid JSON itself), then you can simply add an element to your JSON document containing the attachment body, for example:
{
    "attachment_name": "abc.txt",
    "attachment_body": "The quick brown fox jumps over the lazy dog\n",
    ...
}
As a separate document, referenced from the first. If the attachment is large, or you don't want to serialize it inline, then create a field which just refers to the key of the actual attachment document:
{
    "attachment_name": "abc.txt",
    "attachment_ref": "attachment::document1_attachment1",
    ...
}
Then you'd have a second document with the key attachment::document1_attachment1, which is the actual attachment document.
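A minimal sketch of the two options above in plain JavaScript (no SDK calls; the 16 KB threshold, the helper name `buildDocuments`, and the key format are illustrative assumptions, following the `attachment::...` convention shown above):

```javascript
// Sketch only: decide between inline and referenced storage for an
// attachment. The size threshold is an arbitrary illustration, not an
// SDK rule.
const INLINE_LIMIT = 16 * 1024;

function buildDocuments(docKey, attachmentName, attachmentBody) {
  if (attachmentBody.length <= INLINE_LIMIT) {
    // Option 1: embed the attachment inline with the main document.
    return [{
      key: docKey,
      value: { attachment_name: attachmentName, attachment_body: attachmentBody }
    }];
  }
  // Option 2: separate document, referenced by key from the main one.
  const refKey = 'attachment::' + docKey + '_attachment1';
  return [
    { key: docKey, value: { attachment_name: attachmentName, attachment_ref: refKey } },
    { key: refKey, value: { body: attachmentBody } }
  ];
}
```

Each returned { key, value } pair would then be stored with your driver's insert/upsert call.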
I'm trying to save an XML file in DocumentDB in JSON format. I have no problem converting it and saving it; all the conversion and saving works fine.
However, when I store my XML file, DocumentDB assigns its own name to the document, e.g. 8756b9b9-41ac-47ca-9f4c-abec60b318be.
But I want to save the file under my own custom name, e.g. MyXmlFile1 or MyXmlFile2.
How do I pass my custom name when saving the file?
"jsontoStore" has the content of the file I want to store into DocumentDB.
Code1: Storing without passing any file name.
await client.CreateDocumentAsync(documentCollection.SelfLink, jsontoStore);
Code2: Storing with a custom name, but here I'm not able to pass my content, i.e. "jsontoStore".
document = await client.CreateDocumentAsync(documentCollection.SelfLink,
    new Document
    {
        Id = "sample1"
    });
I believe you are referring to the id of a document, since you mentioned the auto-generated GUID.
All documents must have a unique value in the id property (it is the primary key for the document). If an id is not provided at the time of document creation, DocumentDB will generate one for you.
The solution here is to simply set the id field with the name of your choice (e.g. MyXmlFile1, MyXmlFile2) after you convert your XML to JSON.
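To illustrate the idea in a language-neutral way, here is a small JavaScript sketch (the helper `withCustomId` is hypothetical; in the .NET SDK you would set the `Id` property, or an `id` field in the raw JSON, on the object you already pass as jsontoStore):

```javascript
// Hypothetical helper: give the converted JSON a custom id before
// creating the document, instead of letting the service auto-generate
// a GUID. `id` is the document's primary key.
function withCustomId(jsonToStore, customName) {
  // Copy so the original converted object is left untouched.
  return Object.assign({}, jsonToStore, { id: customName });
}
```

A single CreateDocumentAsync(documentCollection.SelfLink, doc) call with the resulting object then stores both your content and your chosen name.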
I have stored a .txt file in mongodb using gridFS with node.js.
Can we store .pdf and other formats? When I tried to store a .pdf and retrieve the content on the console, it displayed the text in the document along with some junk values. I used this line to retrieve it: "GridStore.read(db, id, function(err, fileData)"
Is there any other, better way to do it?
Can we do a text search on the content of the files stored in MongoDB directly? If so, how can we do that?
Also, can you please tell me where the data of the files is stored in MongoDB, and in what format?
Any help in this regard will be great.
--Thanks
What you really seem to want here is "text search" capability, which in MongoDB requires you to simply store the "text" in a field or fields within your document. Putting "text" into MongoDB is very simple: you just supply the text as the content of a field and MongoDB will store it. The same goes for data of just about any other type, which will simply be stored under the field you specify.
For that you must store the "text" of your data. But before implementing that, let's talk about what GridFS actually is, what it is not, and why it is most certainly not what you think it is.
GridFS
GridFS is not software or a special function of MongoDB. It is in fact a specification, implemented by the available drivers, with the sole intent of enabling you to store content that exceeds the 16MB BSON document size limit.
For this purpose, the implementation uses two collections. By default these are named fs.files and fs.chunks, but they can be whatever you tell your driver implementation to use. The collections store what their default names indicate: fs.files holds the unique identifier and metadata for the "file", and fs.chunks holds the actual content, split into chunks.
Here is a quick snippet of what happens to the data you send via the GridFS API as a document in the "chunks" collection:
{
    "_id" : ObjectId("539fc66ac8b5e6dc058b4568"),
    "files_id" : ObjectId("539fc66ac8b5e6dc058b4567"),
    "n" : NumberLong(0),
    "data" : BinData(2,"agQAADw/cGhwCgokZGJ....
}
For context, that data belongs to a "text" file I sent via the GridFS API functions. As you can see, despite the actual content being text, what is displayed here is a base64-encoded form of the raw binary data.
This is in fact what the API functions do: they read the data you provide as a stream of bytes and store that binary stream in manageable "chunks", so in all likelihood parts of your "file" will not be kept in the same document. Which is actually the point of the implementation.
To MongoDB itself these are just ordinary collections, and you can treat them as such for all general operations such as find, delete and update. The GridFS API spec, as implemented by your driver, gives you functions to "read" from all of those chunks and even return the data as if it were a file. But in fact it is just data in a collection, in a binary format, and split across documents. None of that is going to help you with performing a "search", as the content is neither "text" nor contained in a single document.
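As a rough illustration of that chunked storage (a sketch, not driver code; `toChunks` is a hypothetical helper, and the 255 KB chunk size matches the modern GridFS default):

```javascript
// Sketch of what a GridFS implementation does behind the scenes:
// split a byte buffer into fixed-size chunks, each stored as its own
// document keyed by the parent file id and a sequence number `n`.
function toChunks(filesId, buffer, chunkSize = 255 * 1024) {
  const chunks = [];
  for (let offset = 0, n = 0; offset < buffer.length; offset += chunkSize, n++) {
    chunks.push({
      files_id: filesId,                              // back-reference to fs.files
      n,                                              // chunk sequence number
      data: buffer.slice(offset, offset + chunkSize)  // raw binary slice
    });
  }
  return chunks;
}
```

Reading a "file" back is the reverse: fetch all chunks for a files_id, sort by n, and concatenate the binary data.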
Text Search
So what you really want here is "text search", to allow you to find the words you are searching for. If you want to store the "text" from a PDF file, for example, then you would need to extract that text externally and store it in documents. Alternatively, use an external text search system, which will do much the same.
For the MongoDB implementation, any extracted text would be stored in a document, or possibly several documents, and covered by a "text index" to enable the search functionality. Basically you would do this on a collection like this:
db.collection.ensureIndex({ "content": "text" })
Once the field (or fields) on the documents in your collection is covered by a text index, you can actually search using the $text operator with .find():
db.collection.find({ "$text": { "$search": "word" } })
This form of query allows you to match documents on the terms you specify in your search and to also determine a relevance to your search and "rank" the documents accordingly.
More information can be found in the tutorials section on text search.
Combined
There is nothing stopping you from taking a combined approach. Here you would store your original data documents using the GridFS API methods, and then store the extracted "text" in another collection that is aware of, and contains a reference to, the original fs.files document for your large text document, PDF file, or whatever.
But you would need to extract the "text" from the original "documents" and store it within MongoDB documents in your collection. Otherwise, a similar approach can be taken with an external text search solution, where it is quite common to provide interfaces that can extract text from formats such as PDF.
With an external solution, you would also send along the reference to the GridFS form of the document, so that the original content can be retrieved with another request from any search result, if it is your intention to deliver it.
So ultimately you see that the two methods are in fact for different things. You can build your own approach around combining the functionality, but "search" is for search, and the "chunk" storage is for storing large content, which is exactly what it is designed to do.
Of course, if your content is always under 16MB, then just store it in a document as you normally would. But if it is binary data and not text, it is no good to you for search unless you explicitly extract the text.
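A small sketch of that combined approach (all names here, `buildSearchDoc`, `file_id`, `content`, are illustrative assumptions, not a fixed schema):

```javascript
// Sketch: alongside the GridFS-stored original, keep a searchable
// document holding the extracted text plus a reference back to the
// fs.files entry that holds the raw bytes.
function buildSearchDoc(fsFileId, filename, extractedText) {
  return {
    file_id: fsFileId,     // _id of the fs.files document
    filename,              // convenient for display in search results
    content: extractedText // the field covered by the text index
  };
}
```

You would insert such documents into a collection with a text index on content, and resolve file_id back through the GridFS API when the original file needs to be delivered.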
I use ElasticSearch to index resources. I create a document for each indexed resource. Each resource can contain meta-data and an array of binary files. I decided to handle these binary files with the attachment type. Meta-data is mapped to simple fields of string type. Binary files are mapped to an array field of attachment type (a field named attachments). Everything works fine: I can find my resources based on the contents of the binary files.
Another ElasticSearch feature I use is highlighting. I managed to successfully configure highlighting for both the meta-data and the binary files, but...
When I ask for highlighted fragments of my attachments field, I only get fragments of those files without any information about the source of each fragment (there are many files in the attachment array field). I need a mapping between a highlighted fragment and the element of the attachment array it comes from, for instance the name of the file, or at least its index in the array.
What I get:
"attachments" => ["Fragment <em>number</em> one", "Fragment <em>number</em> two"]
What I need:
"attachments" => [("file_one.pdf", "Fragment <em>number</em> one"), ("file_two.pdf", "Fragment <em>number</em> two")]
Without such a mapping, the user of the application knows that a particular resource contains files with the keyword, but has no indication of which file.
Is it possible to achieve what I need using ElasticSearch? How?
Thanks in advance.
So what you want here is to store the filename.
Did you send the filename in your JSON document? Something like:
{
"my_attachment" : {
"_content_type" : "application/pdf",
"_name" : "resource/name/of/my.pdf",
"content" : "... base64 encoded attachment ..."
}
}
If so, you can probably ask for the field my_attachment._name.
If that's not the right answer, can you refine your question a little and give a sample JSON document (without the base64 content) and your mapping, if any?
UPDATE:
When the highlight comes from an array of attachments, you can't tell which file each fragment comes from, because the array values are flattened behind the scenes. If you really need that, you may want to have a look at nested fields instead.
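As a sketch of that nested alternative (assuming the era-appropriate attachment type and the _name/content field names from the example above; the exact query DSL varies by ES version, so treat this as a shape, not a drop-in request):

```javascript
// Map `attachments` as `nested` so each file keeps its own
// `_name`/`content` pair instead of being flattened into one array.
const mapping = {
  properties: {
    attachments: {
      type: 'nested',
      properties: {
        _name: { type: 'string' },      // filename, e.g. "file_one.pdf"
        content: { type: 'attachment' } // the base64-encoded file body
      }
    }
  }
};

// Query with inner_hits so each matching nested object comes back
// individually, carrying its own highlight next to its own _name.
const query = {
  query: {
    nested: {
      path: 'attachments',
      query: { match: { 'attachments.content': 'number' } },
      inner_hits: {
        highlight: {
          fields: { 'attachments.content': {} }
        }
      }
    }
  }
};
```

Each inner hit then pairs a filename with its highlighted fragments, which is exactly the ("file_one.pdf", "Fragment ...") association asked for.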
On Salesforce,
I've got a Word document as an attachment on a custom object. I can get it as a Blob by selecting the body of the attachment with a SOQL query:
Attachment att = [ SELECT Body FROM Attachment WHERE PARENTID = '**' and ContentType='application/msword'] ;
Blob b = att.body ;
I tried to use the b.toString() method to get the content, but it didn't work. So is there any other way to convert the Blob into a string representing the text written in my Word document?
thanks
Document bodies are saved as Blobs and are base64-encoded. Please use the EncodingUtil class and its base64Encode/base64Decode methods to achieve the desired results.
Documentation: http://www.salesforce.com/us/developer/docs/apexcode/Content/apex_classes_restful_encodingUtil.htm
What exactly are you trying to achieve with this?
If you are trying to display the document content and let the user edit and save it, that is not possible unless ActiveX controls are used, which is a whole different level of complexity.
Please post the code if any coding help is required!
The b.toString() method should return a string of the blob. But keep in mind that it isn't translating the proprietary format of the Word document into plain text. It's still going to be a string with some ugliness, because it represents the Word document format and not the text you would see when viewing it in Word.
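For illustration only, here is the base64 round-trip in Node rather than Apex (the Apex equivalents are EncodingUtil.base64Encode(blob) and EncodingUtil.base64Decode(str)); note that decoding recovers the raw bytes, which for a .doc body is still the binary Word format, not readable plain text:

```javascript
// Conceptual sketch of the encode/decode pair the answer above refers
// to, using Node's Buffer instead of Apex's EncodingUtil.
function base64Encode(text) {
  return Buffer.from(text, 'utf8').toString('base64');
}

function base64Decode(b64) {
  return Buffer.from(b64, 'base64').toString('utf8');
}
```

Extracting the human-readable text from the decoded bytes would still require a Word-format parser on top of this.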