I have an interesting (well, to me anyway) problem here. I am doing a code-based mass upload of documents into a SharePoint library, with a twist: the file's last modified date needs to be maintained in the item properties after the upload.
The file details are determined from a CSV file that contains the path and the metadata to be attached to the library items, since the library is based on a custom content type with metadata fields.
The problem I am having is that the modified date is only maintained on the last file uploaded; all other files show the upload time. I did a little more digging and turned on version control. It turns out that the files I upload are being modified multiple times, as they have multiple minor versions applied to them, while the final file uploaded has only one. For example, I am uploading 6 files to test the process: the last file uploaded sits at version 0.1, while the other files range from version 0.6 to 0.8 to 0.12. All files had the correct modified date initially upon upload, but immediately afterwards a change occurred that incremented the version and overwrote the modified date.
Version History: (screenshot of the version list omitted)
I have tried updating the date using the following methods:
spFile.Item.UpdateOverwriteVersion()
item.Update() (when accessing items as a list and not from a file object)
spFile.Item.SystemUpdate(false)
None of these updates seem to do what I wish.
Try
    Using site As SPSite = New SPSite(siteURL)
        Using Web As SPWeb = site.OpenWeb()
            Dim itemList As SPList = Web.Lists("DM Import Test")
            For Each item As SPListItem In itemList.Items
                If item.Name = importFile.fileName Then
                    ' Overwrite the built-in Modified field with the value from the import record
                    item(SPBuiltInFieldId.Modified) = importFile.modifiedDate
                    item.Update()
                End If
            Next
        End Using
    End Using
importFile is just a custom object with the properties I need.
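For reference, the SystemUpdate attempt looked roughly like this (a sketch reconstructed from the calls listed above; spFile is the SPFile returned by the upload):

' Write the original timestamp onto the file's item, then persist
' without creating a new version.
spFile.Item(SPBuiltInFieldId.Modified) = importFile.modifiedDate
spFile.Item.SystemUpdate(False)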
Please let me know if you have come across this and if you happen to have a resolution.
I also tried building the entry with:
spFile = uploadFolder.Files.Add(Path.GetFileName(importFile.FullPath), fileStream, createdBy, modifiedBy, created, modified)
where uploadFolder is a folder object of the library.
Thank you.
I am working on reading Excel data in a portlet, putting it under Web Content, and sorting it into different folders and subfolders.
All I found was how to create files and folders under the Documents and Media library, but not under Web Content:
https://help.liferay.com/hc/en-us/articles/360029045451-Creating-Files-Folders-and-Shortcuts
https://help.liferay.com/hc/en-us/articles/360028725672-Creating-Folders
Follow these steps to create a folder with the DLAppService method addFolder:
Get a reference to DLAppService:
@Reference
private DLAppService _dlAppService;
Get the data needed to populate the addFolder method’s arguments. Since it’s common to create a folder with data submitted by the end user, you can extract the data from the request. This example does so via javax.portlet.ActionRequest and ParamUtil:
long repositoryId = ParamUtil.getLong(actionRequest, "repositoryId");
long parentFolderId = ParamUtil.getLong(actionRequest, "parentFolderId");
String name = ParamUtil.getString(actionRequest, "name");
String description = ParamUtil.getString(actionRequest, "description");
ServiceContext serviceContext = ServiceContextFactory.getInstance(
    DLFolder.class.getName(), actionRequest);
Call the service reference’s addFolder method with the data from the previous step:
Folder folder = _dlAppService.addFolder(
    repositoryId, parentFolderId, name, description,
    serviceContext);
Please let me know or guide me on how to solve this problem.
Thanks in advance.
In the Document Library, you can upload and store documents.
Under Web Content, you can create and store web content articles.
It sounds like you want to store documents (e.g. Excel files) under Web Content, and that's not what those folders are built for. Note that this is not a file system; these are distinct ways to organize the different types of content.
So, the way to solve this problem is to go back to the requirements, and find a working technical solution for the underlying business problem.
Any folder you create through an API with the DL prefix will appear under the Document Library. Any folder you create through an API starting with a Journal prefix will appear under Web Content. There simply is no overlap.
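If the requirement really is folders under Web Content, the Journal services are the counterpart to DLAppService. A minimal sketch along the lines of the example above (assuming an OSGi component on Liferay 7.x; the exact addFolder signature can vary between versions):

@Reference
private JournalFolderService _journalFolderService;

...

long groupId = ParamUtil.getLong(actionRequest, "groupId");
long parentFolderId = ParamUtil.getLong(actionRequest, "parentFolderId");
String name = ParamUtil.getString(actionRequest, "name");
String description = ParamUtil.getString(actionRequest, "description");

// Build the service context against the Journal model this time
ServiceContext serviceContext = ServiceContextFactory.getInstance(
    JournalFolder.class.getName(), actionRequest);

// This folder appears under Web Content, not Documents and Media
JournalFolder folder = _journalFolderService.addFolder(
    groupId, parentFolderId, name, description, serviceContext);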
I am trying to create a dependency pipeline for files before executing my model refresh (a Web Activity). I want to make sure all the related files are there in their respective folders and that all files are the latest.
Suppose my model refresh uses the following files present in ADLS:
myadls/raw/master/file1.csv
myadls/raw/dim/file2.csv
myadls/raw/dim/file3.csv
myadls/raw/reporting/file4.csv
We need to compare each file's last modified date with today's date; if they are equal, the file is the latest. If any file is not the latest, I need an email with the name of that file, and I shouldn't trigger my Web Activity, which does the model refresh.
I have created this pipeline using Get Metadata, ForEach, If Condition, Web, and Set Variable activities. The problem is that I am not able to get an email for the file that is not the latest.
How can I get an email for the file that is not the latest in my scenario?
Note that the above folders can have more than 100 files, but I am only looking for the specific files I use in my model.
We use the SendGrid API to send emails at my company.
You can easily pass the file names in the body of the email using any email API out there. Write the file names to a variable, then reference the variable in the body. It sounds like you have built almost everything out, so within your ForEach, just add an Append Variable activity that writes a new value to your array variable. Then you can use those array values in your send-email Web Activity, or use a string conversion function; there are many ways to do it.
I will update this post with an example later.
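In the meantime, a rough sketch of the Web Activity settings against SendGrid's v3 mail-send endpoint (the array variable staleFiles and both email addresses are placeholder names):

URL: https://api.sendgrid.com/v3/mail/send
Method: POST
Headers: Authorization: Bearer <your SendGrid API key>, Content-Type: application/json
Body:

{
  "personalizations": [{"to": [{"email": "team@example.com"}]}],
  "from": {"email": "datafactory@example.com"},
  "subject": "Model refresh blocked: stale files found",
  "content": [{
    "type": "text/plain",
    "value": "These files are not the latest: @{join(variables('staleFiles'), ', ')}"
  }]
}

The @{...} interpolation resolves the variable at runtime, so the email body lists every file name appended during the ForEach.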
As per your current architecture, you can create a variable per ForEach activity to store the file names. Within each ForEach, whenever a file is not the latest, you can save its name with an Append Variable activity, and then in the final validation you can concatenate all the ForEach loop variables to get the final list of files that were not modified.
But ideally I would suggest the approach below:
1. Have the list of files created as a Lookup activity output.
2. Feed that to a single ForEach activity in sequential execution.
3. Within the ForEach, check whether each file is the latest via an If Condition and a Get Metadata activity (see the expression sketch after this list).
4. If it is not, append the file name via an Append Variable activity.
5. Once out of the ForEach, use an If Condition to check whether the file-name variable is blank or has values.
6. If it has values, you can send an email; the variable holds all the file names that are not up to date.
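For step 3, a sketch of the expressions involved (assuming the Get Metadata activity is named 'Get File Metadata' and requests the Last modified field, the Lookup rows expose the file name as 'name', and the array variable is called staleFiles; all of these names are illustrative):

If Condition expression, true when the file was modified today:

@equals(formatDateTime(activity('Get File Metadata').output.lastModified, 'yyyy-MM-dd'), formatDateTime(utcnow(), 'yyyy-MM-dd'))

Append Variable value in the False branch, recording the stale file:

@item().name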
On Parse.com, I have a class with one column that holds AUDIO data.
My question is:
Does the AUDIO data get completely removed automatically when I delete the row (using a Cloud Code function)?
Or do I need to do something special (before or after) to clean off the AUDIO data?
To make the context clearer here is the kind of code I use to upload the sound data:
PFFile *parse_Sound;
NSData *soundData = [NSData dataWithContentsOfURL:myURL];
parse_Sound = [PFFile fileWithName:@"VOICE"
                              data:soundData];
[parse_Sound saveInBackgroundWithBlock:^(BOOL succeeded, NSError *error) {
    ……….
}];
and later:
parse_Object = [PFObject objectWithClassName:@"PeraSentence"];
[parse_Object setObject:parse_Sound forKey:@"AUDIO"];
………
When you delete a row, the files are not deleted. You need to go to Settings > General and click the "Clean Up Files" button.
All files that are not referred to by a pointer in your database are removed.
As you know, files can be referenced to from file-type columns in your objects. They can be pointed to in this manner by one or many different objects, and so they are not deleted automatically when any of the objects that refer to them are deleted. The Clean Up job will delete any files that have no such references to them.
As a safeguard, any files uploaded in the previous hour won't be deleted, regardless of how many objects point to them. This provides a grace period to avoid deleting a file that was recently uploaded but your app has not added a reference to it.
This Clean Up job should be used carefully. If your app is not using the file-type column to refer to files, and instead is copying the CDN URL to a string-type column, this will not count as a reference and the file will be deleted if it has no other file-type pointers to it.
Source: https://parse.com/questions/clean-up-files
It depends how you stored it in Parse. If you stored it as data in a column, then it lives and dies with the row: delete the row and you delete the data.
If instead you used a Parse File, your row only contains a pointer to the file; delete the row and you delete the pointer, but the file is still there. It's the same concept as pointers to other records.
I have an old Notes client application. On the form are two rich text fields that hold attachments: JPGs, PDFs, whatever. The document also contains a unique key and other metadata.
What I want to do is migrate from having multiple attachments on a document to a new document per attachment. I've never done much with embedded objects and even less with MIME.
I'm currently working in XPages Java but could go to LotusScript if need be.
I was working with this snippet:
List<EmbeddedObject> docPicture = this.getFileAttachments(doc, "picture");
List<EmbeddedObject> docPDF = this.getFileAttachments(doc, "pdf");
for (EmbeddedObject eoPic : docPicture) {
    picCount++;
    // One new document per attachment, carrying over the key metadata
    Document newDoc = currentDatabase.createDocument();
    newDoc.replaceItemValue("form", "fm_file");
    newDoc.replaceItemValue("uploadToken", doc.getItemValueString("barCodeHuman"));
    newDoc.replaceItemValue("fileName", eoPic.getName());
    newDoc.replaceItemValue("size", eoPic.getFileSize());
    fileName = eoPic.getName();
    fileType = fileName.substring(fileName.length() - 3);
    newDoc.replaceItemValue("type", this.getMIMEType(fileType));
    // Extract the attachment and add it to the new document as a MIME part
    InputStream attachInputStream = eoPic.getInputStream();
    Stream attachStream = session.createStream();
    attachStream.setContents(attachInputStream);
    MIMEEntity attachField = newDoc.createMIMEEntity("attachment");
    MIMEHeader attachHeader = attachField.createHeader("content-disposition");
    attachHeader.setHeaderVal("attachment;filename=\"" + eoPic.getName() + "\"");
    attachField.setContentFromBytes(attachStream, this.getMIMEType(fileType),
        MIMEEntity.ENC_IDENTITY_BINARY);
}
Note I'm using the OpenNTF API but could go back to the lotus objects if need be.
Anyway, this almost worked. I got my documents, one per attachment. But when looking at the "attachment" field in the document properties, it's not a rich text field, it's a MIME item, and that's causing me problems with the next phase of my project. The rich text documents work fine, but the MIME ones do not.
This is a one-time migration, so any thoughts on how I can end up with rich text fields would be appreciated. Thanks!!
Try to not involve MIME entities at all.
As Oliver said, check that your target rich text field on the form does not have the 'store contents as MIME' option checked.
You could even use a rich text lite field and restrict it to attachments.
I think you might be using the MIMEEntity setContentFromBytes method because you want to move the attachment directly from doc to doc?
If you want to move using just rich text embedded objects (no MIME entity involvement), you need to extract the EmbeddedObject to the file system first using extractFile.
Then, using a RichTextItem that you create on the new doc (instead of creating a MIME entity), you can use rti.embedObject to attach the file you extracted (it's probably best to delete the temporary extract file after a successful migration). See the Designer help for an example of the parameters required for embedding attachments.
When extracting the file to the file system, you can extract it to the JVM's temporary directory. The file on the file system needs to have the same file name that you want it to have when attached to the new document. For this reason you can't really use File.createTempFile(), because the temp file name will have random characters in it. Instead, you can get the temp directory with System.getProperty("java.io.tmpdir") and use that in your extract file path.
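Putting that together, a minimal sketch of the extract-and-embed approach (assuming the same doc, newDoc and eoPic variables as the snippet in the question; shown with the standard lotus.domino classes, but the OpenNTF API is equivalent):

// Extract to the JVM temp dir, keeping the original attachment name
String tempPath = System.getProperty("java.io.tmpdir")
    + java.io.File.separator + eoPic.getName();
eoPic.extractFile(tempPath);

// Create a plain rich text item (not a MIME entity) and embed the file
RichTextItem rti = newDoc.createRichTextItem("attachment");
rti.embedObject(EmbeddedObject.EMBED_ATTACHMENT, "", tempPath, null);

newDoc.save(true, false);

// Remove the temporary extract after a successful save
new java.io.File(tempPath).delete();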
Another thing to check before you start processing is the current NotesSession's isConvertMIME setting: if the source field is MIME, session.isConvertMIME() == true will convert the field to rich text when loading the doc. I think it is false by default in XPages, and I don't think it will affect you, because your source attachments are already in rich text, but it is important to note for anyone reading this whose source field is MIME. Also, if you change this setting using setConvertMIME, be sure to change it back to what it was when you finish your processing.
I'm going to build a document library with SharePoint 2010. I'm handling files that are updated frequently and that carry many metadata fields, such as update frequency, a filtering flag, a monitoring flag, etc.
If SharePoint stores the files in the database, I think it should be possible to update the file content alone without touching the other fields containing metadata, but apparently it's not easy at first glance.
Is there any simple way to update document content in the MOSS environment? I guess checking out the file, updating or editing it, and checking it back in is a possible solution, but that requires too much work for the end users.
You could disable check-in/check-out on the document library.
Or you could do something like this in a web part:
// Release any existing checkout so we can take our own
SPFile file = web.GetFile(url);
file.UndoCheckOut();

// Read the replacement content from disk
string rawdata = Path.Combine(rootdirectory, url);
byte[] data = File.ReadAllBytes(rawdata);

// Overwrite the file content, then check in and approve
file.CheckOut();
file.SaveBinary(data);
file.Update();
file.CheckIn("some thing");
file.Approve("some thing");
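Note that the Approve call only applies if content approval is enabled on the library, and UndoCheckOut discards any changes held by an existing checkout, so use this with care.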