Get file path of uploaded file from p:fileUpload - jsf

I would like to upload one file at a time, get the path of each file, and add it to a list. The list would later be used to save them all in a permanent directory like E:/myfile/....
I tried the following PrimeFaces component:
<p:fileUpload value="#{fileUploadView.file}" mode="simple" />
However, I am unable to get the file path. How can I get it?

The end user won't send you the full client-side file path. And even if they did, how would you ever get the file contents from only the path? If that were possible, it would be a huge security breach, as basically anyone in the world would be able to silently scrape files from anyone else's disk over the Internet.
Usually, only the file name is sent, and you should not even use exactly that name to save the obtained content on disk, in order to avoid files being overwritten when another end user coincidentally uploads a file with exactly the same name. Use it at most as metadata (e.g. to prefill the Save As file name in case the end user wants to download it back at a later moment).
You should actually be interested in the file content sent to you, in the form of an InputStream or byte[], not in the original client-side path or file name. The content is sent only once, and you should immediately read it and write it to a more permanent location on the server side as soon as the bean action method is invoked. Then keep track of the (autogenerated/predefined!) file names/paths of those saved files in some List<String> or List<File> in the view or perhaps the session scope.
See also:
How to save uploaded file in JSF
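As a rough sketch of the save step described above: read the upload's InputStream once, write it under a server-generated name, and track the resulting paths in a list. The class and method names here are hypothetical, not PrimeFaces API; a bean action method would call something like save(file.getInputStream(), file.getFileName()).

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Hypothetical helper: saves uploaded content under a server-generated name
// and keeps track of the resulting paths.
public class UploadStore {
    private final Path baseDir;
    private final List<Path> savedFiles = new ArrayList<>();

    public UploadStore(Path baseDir) throws IOException {
        this.baseDir = Files.createDirectories(baseDir);
    }

    /**
     * Writes the stream to disk under a UUID-based name; the original client
     * file name is kept only as a readable suffix (metadata), never trusted.
     */
    public Path save(InputStream content, String clientFileName) throws IOException {
        // Strip any client-side path (IE sends e.g. C:\fakepath\cv.pdf),
        // keeping only the last segment.
        String safeName = clientFileName.substring(Math.max(
                clientFileName.lastIndexOf('/'), clientFileName.lastIndexOf('\\')) + 1);
        Path target = baseDir.resolve(UUID.randomUUID() + "_" + safeName);
        Files.copy(content, target, StandardCopyOption.REPLACE_EXISTING);
        savedFiles.add(target);
        return target;
    }

    public List<Path> getSavedFiles() { return savedFiles; }
}
```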

Related

How to make an express js temporary route/path for image?

I have created an API with Express to submit a user form with an image, using multer. Now I want the URL of the image to be temporary, so that no user can see the image after a certain amount of time. One URL should also be fixed to one user only; if he wants to see the image again, he has to request a new URL.
https://localhost:3000/api/image?=sometempURL1
for another user, the same image should have another unique URL like
https://localhost:3000/api/image?=sometempURL2orSomething
I just don't want to reveal the actual file path/name.
One way to do this is to use routes with parameters: /path/:id
The id would be the file's path hashed with a hash function.
You would have two locations for the files, one temporary and one permanent. When you receive a file, you store it in both locations and save the path of the permanent copy. For your temporary route, you return the hashed path of the file in the temporary location.
How is this location temporary? You can use a cron job to schedule its deletion.
So you serve /path/to/file.jpg under a format like b017bcade5394d0076ad808e94482576, and the route would look like /file/b017bcade5394d0076ad808e94482576. The file can be fetched using this hashed value, but at some point (depending on the schedule you set in the cron job) it is deleted from the temporary location, so the link becomes invalid.
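The token scheme itself is language-agnostic (the question uses Express, but the idea is the same everywhere): hash the path together with something per-user so two users get different URLs, and keep a server-side map from token to real path. A minimal sketch in Java, with all names hypothetical:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HashMap;
import java.util.Map;

// Sketch of the opaque-URL idea: map an unguessable token to the real file
// path server-side, so the path itself is never exposed to the client.
public class TempLinks {
    private final Map<String, String> tokenToPath = new HashMap<>();

    /**
     * Derives a hex token from the path plus a per-user salt, so two users
     * get different URLs for the same file.
     */
    public String issueToken(String filePath, String userSalt) throws NoSuchAlgorithmException {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest((filePath + ":" + userSalt).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        String token = hex.toString();
        tokenToPath.put(token, filePath); // a scheduled job would evict expired entries
        return token;
    }

    /** Returns the real path for a token, or null if it expired / never existed. */
    public String resolve(String token) { return tokenToPath.get(token); }
}
```

A route handler would then look the token up, stream the file if found, and return 404 otherwise; the cron job deletes the temporary copies and evicts their tokens.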

p:remotecommand to upload files

I would like to upload files programmatically to my JSF application. The user should select a directory on his system, and a JS script should loop over every file in the directory and send each one to a server-side listener.
I cannot use FileUpload, because it cannot select a whole directory with thousands of files, so I was thinking of using jQuery and sending the files to a remoteCommand, but I have no clue how to send the file itself (normally I pass just strings).
so I was thinking of using jQuery and sending the files to a remoteCommand, but I have no clue how to send the file itself
Don't go there. It is a bad attempt at a workaround for a bad design choice. You'd most likely run into similar problems, and what about the user having to select a lot of files a second time if it fails halfway? It might become slow, and you might run into browser limits (search for uploading multiple files in plain HTML)...
If you still want to do it via a web browser (which, according to one of your other questions, you do not want to), maybe try something like https://webdeltasync.github.io/ (disclaimer: I did not use it myself, and there might be similar ones: https://www.google.com/search?q=browser+based+rsync; it is just a hint in which direction to find a real solution).

Which JSF-tag to use to get the path of an Input Directory on the client side?

Is there a directory-input component for JSF, or in a library like OmniFaces or PrimeFaces? I don't need <h:inputFile>, as the file does not actually need to be uploaded. I would like to provide a field for the user to pick the path of a file, for example:
C:\Folder One\myFile.txt
and the managed bean will interpret the path, for example to read the content of the file. I don't want the user to enter the file path manually in an input text field. And I don't have the usual client/server issue, as the user will select the file on the machine on which my JSF application is deployed.
If such a component is not available, is there any workaround? Maybe I can provide a <h:inputFile> and get the path from it?

Why does Domino store all my inline images in the xsppers folder?

Every time I reload my web page, new files are added to the c:\windows\temp\notes...\xsppers folder on the server, and these files are never deleted. I have to delete them manually, and it can be several GB of data every month.
I have a simple XPage with a repeat control that displays data from several documents, using a computed field mapped to a rich text field.
The rich text fields contain a lot of inline images that have been added using the Notes client.
Now, every time I reload my web page these images are detached to the xsppers folder, which is causing my hard drive to run out of disk space all the time.
What is the reason for this behaviour and how can I avoid it from happening?
In the image below you see all the GIFs that have been created with new unique names; each time I reload my web page a new set of images is added to the folder.
I am using Domino 9
As Egor Margineanu wrote, this can happen if your images are not stored as MIME images in your rich text item.
This forces the Domino server to detach the attachment(s) to disk over and over again, because it needs to generate a GIF from the inline image. If you change the MIME type of your rich text item in your form and save the document(s) again, the images are stored in the "correct" format, and the Domino server is able to identify that the images are already on the HDD.
As far as I can see, the temporarily detached attachments are not wiped when the session ends; this only seems to happen when the application ends.
Not a complete answer, but some clarification from the XPages Portable Command Guide, page 36:
The files remain in the temporary persistence location until the user session expires. The file is not removed after the document is saved, although it is no longer referenced by URLs.
It may be useful to change this setting to point to a different location if the folder is taking up too much space on the main server drive and another drive has more available space. This option is server-wide, so it should be set in the server xsp.properties file. Values set in a particular application's xsp.properties file are ignored.
Based on your question, Thomas, it seems that this is not what you are experiencing.

What security issues do we face if we publish a form that lets users upload any type of file into our database?

I am trying to assess our security risk if we allow a form on our public website that lets the user upload any type of file and have it stored in the database.
I am worried about the following:
Robots uploading information
A huge increment of the size of the database
The form is a résumé upload, so HR people will be downloading files that look like JPEG, DOC, or PDF but may actually contain a virus.
You can use captchas for dealing with robots
Set a reasonable file size limit for each upload
You can do multiple checking for your file upload control.
1) Check the extension of the file (.wmv, .exe, .doc). This can be implemented with a regular expression.
2) Actually check the file header or content type (e.g. GIF, Word, Excel, image). Sometimes the file extension alone is not sufficient.
3) Limit the file size (e.g. 20 MB).
4) Never accept the file name provided by the user. Always rename the file to some GUID according to your specifications. This way a hacker won't be able to predict the actual name of the file stored on the server.
5) Store all the files outside the web virtual directory, preferably on a separate file server.
6) Also implement a captcha for the file upload.
In general, if you really mean to allow any kind of file to be uploaded, I'd recommend:
A minimal type check using MIME magic numbers, verifying that the file's extension corresponds to its actual content (though this doesn't help much if you are not going to limit the kinds of files that can be uploaded).
Better yet, have an antivirus (the free ClamAV, for example) check the file after uploading.
On storage, I always prefer to use the filesystem for what it was created for: storing files. I would not recommend storing files in the database (assuming a relational database). You can store the file's metadata in the database along with a pointer to the file on the filesystem.
Generate a unique id for the file and use a 2-level directory structure to store the data, e.g. id=123456 => /path/to/store/12/34/123456.data
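The 2-level fan-out keeps any single directory small even with millions of files. A sketch of the id-to-path mapping (zero-padding so short ids still split cleanly is my own addition):

```java
// Sketch of the 2-level layout from the answer: id 123456 -> 12/34/123456.data.
public class FanOut {
    /** Maps a numeric file id to its relative storage path. */
    public static String storagePath(long id) {
        // Zero-pad to at least 6 digits so small ids also yield two levels.
        String s = String.format("%06d", id);
        return s.substring(0, 2) + "/" + s.substring(2, 4) + "/" + s + ".data";
    }
}
```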
That said, this can vary depending on what you want to store and how you want to manage it. Serving a document repository, an image gallery, or a simple "shared directory" are not the same thing.
