When I upload a picture, the file is saved successfully and the path is set successfully. But the uploaded image is not displayed immediately after the form submit; only when I reload the page is the uploaded image displayed.
I'm saving the uploaded file as below:
InputStream is;
try {
    File file = new File("C:\\****\\*****\\Documents\\NetBeansProjects\\EventsCalendary\\web\\resources\\images\\uploadPhoto.png");
    is = event.getFile().getInputstream();
    OutputStream os = new FileOutputStream(file);
    setUserPhoto("\\EventsCalendary\\resources\\images\\" + file.getName());
    byte[] buf = new byte[1024];
    int len;
    while ((len = is.read(buf)) > 0) {
        os.write(buf, 0, len);
    }
    os.close();
    is.close();
} catch (IOException ex) {
    ex.printStackTrace();
}
Why is the uploaded image only displayed after reloading the page and how can I solve this?
You're writing the file straight into the IDE's project folder, apparently with the intent of saving it in the webapp's deploy folder. This is a bad idea, for the following three main reasons:
Changes in the IDE's project folder do not immediately get reflected in the server's work folder. The IDE runs a kind of background job that takes care of syncing the server's work folder with the latest updates (in IDE terms this is called "publishing"). This is the main cause of the problem you're seeing.
In real-world code, there are circumstances where storing uploaded files in the webapp's deploy folder will not work at all. Some servers (either by default or by configuration) do not expand the deployed WAR file onto the local disk file system, but keep it entirely in memory instead. You can't create new files in memory without basically editing the deployed WAR file and redeploying it.
Even when the server expands the deployed WAR file into the local disk file system, all newly created files will get lost on a redeploy or even a simple restart, simply because those new files are not part of the original WAR file.
You need to write the file to a fixed path outside the project/deploy folder instead, for example /var/webapp/uploads (a sketch of this is shown below). Then, to get it served by your webapp, just add that path as a new web application context to the server.
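A minimal sketch of the saving part, reusing the PrimeFaces upload from your code; File.createTempFile generates a unique file name inside the uploads folder (the "upload-"/".png" prefix and suffix are just example choices, and the /var/webapp/uploads folder is assumed to exist):

File uploads = new File("/var/webapp/uploads");
File file = File.createTempFile("upload-", ".png", uploads); // unique name inside the uploads folder
InputStream input = event.getFile().getInputstream();
OutputStream output = new FileOutputStream(file);
try {
    byte[] buffer = new byte[1024];
    int length;
    while ((length = input.read(buffer)) > 0) {
        output.write(buffer, 0, length);
    }
} finally {
    output.close();
    input.close();
}
setUserPhoto("/uploads/" + file.getName()); // relative URL served via the context below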
Based on your previous question, I know that you're using GlassFish 3.1. On this server, such a context is called a "virtual host". You can configure it at server level in the admin console at http://localhost:4848 > Configuration > HTTP Service > Virtual Servers, or at webapp level by adding the following line to /WEB-INF/glassfish-web.xml (your IDE should have autogenerated one; note that before GlassFish 3.1 this file was called sun-web.xml, so if you see manuals/blogs/tutorials referencing that, yes, it's exactly the same file):
<property name="alternatedocroot_1" value="from=/uploads/* dir=/var/webapp" />
Either way, you should then be able to serve those uploaded images from http://localhost:8080/contextname/uploads/* via <img> the usual way.
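For example (the file name here is hypothetical):

<img src="http://localhost:8080/contextname/uploads/upload-123.png" alt="user photo" />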
See also:
How to upload files to server using JSP/Servlet?
Recommended way to save uploaded files in a servlet application (contains a Tomcat configuration example)
Reading/writing a text file in a servlet, where should this file be stored in JBoss? (contains JBoss configuration example)
Simplest way to serve static data from outside the application server in a Java web application
I have a Blazor Server 6.0 app with links to download .msg files.
I have set up IIS to serve that MIME type, trying both application/octet-stream and application/vnd.ms-outlook (and restarting IIS).
I have also tried putting the staticContent tag in web.config as suggested here:
.msg file gives download error
And obviously, in my Program.cs I have app.UseStaticFiles();
I tried putting the .msg files in a non-Blazor app and they work OK, so I think it's not IIS-related.
So why can't I download (or open automatically in Outlook) this type of file, while others (docx, pdf, zip, etc.) are OK?
ASP.NET Core -- on the server side -- also needs to know about the files it has to serve. You can enable serving all unknown file types (I'd rather not include the relevant code, as it is a major security risk), or you can add your own additional mappings like so:
var provider = new FileExtensionContentTypeProvider();
provider.Mappings[".msg"] = "application/vnd.ms-outlook";

// app.UseStaticFiles();
app.UseStaticFiles(new StaticFileOptions()
{
    ContentTypeProvider = provider
});
More info in the official docs: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/static-files?view=aspnetcore-7.0#fileextensioncontenttypeprovider
Additionally, Blazor Server registers custom options for serving static files (like .server.js, which is different from just .js). This isn't directly exposed as a public API to configure, but you can look at the source here as to what the AddServerSideBlazor extension method actually does. The solution there relies on you calling UseStaticFiles without explicitly specifying the options, so that it can retrieve the StaticFileOptions instance from DI.
Armed with this knowledge, you can override an already configured options instance as follows:
builder.Services.PostConfigure<StaticFileOptions>(o =>
{
    ((FileExtensionContentTypeProvider)o.ContentTypeProvider).Mappings[".msg"] = "application/vnd.ms-outlook";
});
This configures the already-initialized options instance registered in DI (after all other configuration has happened on it, hence PostConfigure).
Note that if, for whatever reason, you decide to use a different IContentTypeProvider, the unsafe cast above will need to be revised as well.
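A defensive variant of the same idea (just a sketch; it silently skips the mapping when the provider isn't the default FileExtensionContentTypeProvider):

builder.Services.PostConfigure<StaticFileOptions>(o =>
{
    // Pattern-match instead of casting, so a custom IContentTypeProvider won't throw
    if (o.ContentTypeProvider is FileExtensionContentTypeProvider provider)
    {
        provider.Mappings[".msg"] = "application/vnd.ms-outlook";
    }
});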
I'm working on an app that, instead of a database, uses the file system in the server's root directory. It's basically a note application that allows me to save notes. Each note is a serialized object of the Note class, stored under the following structure: \Data\Notes\MyUsername\Title.txt
When I'm testing this on localhost through IIS Express everything works fine and I can easily go step by step there.
However, once I publish the app to Azure, the folder structure is still there (made a test Controller that uses Directory.GetFiles() and .GetDirectories() to simulate folder browsing so I'm sure that the files are there) but the file simply doesn't get loaded.
The loading method that's being called:
public T Load<T>(string filePath) where T : new()
{
    StreamReader reader = null;
    try
    {
        reader = new StreamReader(filePath);
        var RawDB = reader.ReadToEnd();
        return JsonConvert.DeserializeObject<T>(RawDB);
    }
    catch
    {
        // Swallows the real exception and silently falls back to the default
        return default(T);
    }
    finally
    {
        if (reader != null)
            reader.Dispose();
    }
}
Since I can't debug the app on Azure normally, I tried to dump as much info as I could through ViewData, and even there everything looks okay and the paths match, but the deserialized object is still null. This happens only when trying to open an existing note WITHOUT creating a new one first (more on that later).
Additionally, like I said, new notes get saved in the folder structure, and there's a note sidebar on the left that allows users to switch between notes. The note browser is nothing more than a list collected with a .GetFiles() of that folder.
On Azure, this works normally and if I were to delete one manually it'd be removed from the sidebar as well.
Now here's the kicker. On localhost, adding a note adds it to the sidebar and I can switch between them normally.
Adding a note on Azure makes all Views display only that new note, regardless of which note I open, and the new note does NOT get stored in the structure (I don't know where it ended up at all!), even though the path is defined normally at that point and it should save just like it does on localhost.
var model = new ViewNoteModel()
{
    Note = Load<Note>($@"{NotePath}\{Title}.txt"), // works on localhost, fails on Azure on many levels; Title is a URL param
    MyNotes = GetMyNotes() // works fine, reads the right directory locally and on Azure
};
To summarize:
Everything works fine on localhost; the important part doesn't work on Azure.
If no new note is created but an existing note is opened, the correct note gets loaded (based on the URL param) on localhost; on Azure it breaks and loads a default Note object (not null, just the default-constructor data, since JsonConvert requires it).
If a new note is created, on localhost you'll see it and still be able to open all other notes; on Azure you'll see only the new note, regardless of which note is picked.
It's really strange, and I have no idea what could cause this. I thought it had something to do with Azure handling requests differently, so maybe the controller pushes the View before the model is initialized completely, but that doesn't make sense since there's nothing async here.
However, the fact that it loads a note that doesn't even exist on the server is even more absurd, and I have no explanation for that.
Additionally, this issue is not linked to a session. I logged in through my phone, and it showed the fake note there right away as well.
P.S. Before you say anything about storage, please note this: our university grants us a very limited Azure subscription, with just the lowest-tier App Service and a 5 DTU SQL server; 99% of the rest is locked out of our subscription. This is why I'm storing stuff on the server, not because I believe it's the smart thing to do.
I am new to Lotus, and I am trying to read a properties file from the server in a Java agent. I am getting an error that the file is not found, and if I use the ff.load method it gives an error that it cannot access the properties file.
Can anyone tell me what I have to add to this code so that it works correctly? I have also corrected the restriction rights and selected the second option.
Java code:
InputStream con = session.getClass().getResourceAsStream("C:/pqrs.properties");
if (con == null)
{
    System.out.println("FLAG: FILE NOT FOUND!!!!!!!!!!!");
}
else
{
    System.out.println("FLAG: FILE FOUND!!!!!!!!");
}
Properties ff = new Properties();
ff.load(con);
The OS's filesystem security settings are probably not allowing the Domino server to read files from the root directory of the C: drive.
Try moving the file into a location that you know for sure the Domino server can read, e.g. the C:\IBM\Lotus\Domino\Data folder, or whatever the equivalent location is in your Domino server's configuration.
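A minimal sketch of reading it from there, assuming the example data-folder path above (plain file I/O is used here, since getResourceAsStream expects a classpath resource rather than an absolute file path):

Properties ff = new Properties();
InputStream in = new FileInputStream("C:\\IBM\\Lotus\\Domino\\Data\\pqrs.properties");
try {
    ff.load(in); // loads the key=value pairs from the file
} finally {
    in.close();
}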
DLFileEntryLocalServiceUtil.getFileAsStream(...) is not working on my server, where I have an S3 configuration for Documents and Media. The method works perfectly, without any issue, on my local server where S3 is not configured. Does it require any extra configuration?
I basically want to convert a DLFileEntry into a File, so I used another method:
File file = DLFileEntryLocalServiceUtil.getFile(user.getUserId(),dlFileEntry.getFileEntryId(), dlFileEntry.getVersion(), true);
Is there a service that creates basically a one-time download link for a file, preferably something I can use from NodeJS?
I've done some research on FilePicker and haven't found anything about regenerating the link it gives you for a file. There may be a way to do this with NodeJS, but I'm using Meteor at the same time, so many Node things will probably conflict.
You could build it with Meteor, using meteor-router (installed with Meteorite) and server-side routing to deliver the files.
You need a collection to keep track of downloaded files:
Server JS
var downloads = new Meteor.Collection("downloads");

// Create a link
downloads.insert({url: "/mydownload.zip", downloaded: false});

Meteor.Router.add('/file/:id', 'GET', function(id) {
    var download = downloads.findOne(id);
    if (download) {
        if (download.downloaded) {
            this.response.send("You've already downloaded me");
        }
        else {
            // Flag it as downloaded so the link only works once
            downloads.update(download._id, {$set: {downloaded: true}});
            // I guess you could just redirect or stream the file for an extra layer of surety
            this.response.redirect(download.url);
        }
    }
});
On the client, you can use /file/{{_id}} (with _id being the _id of the file's document in downloads) as the link.
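For example, in a client template (the myDownloads helper, returning this user's documents from downloads, is hypothetical):

{{#each myDownloads}}
  <a href="/file/{{_id}}">Download</a>
{{/each}}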
My recommendation would also be to add custom server-side logic to count the number of downloads (or just flag a file as downloaded/not downloaded, as above) and respond accordingly. The closest you could get with Filepicker.io would be using its security policies to restrict downloading the file to a specific time interval.
In addition to using the router package, in Meteor.startup you can add:
var require = __meteor_bootstrap__.require;
fs = require( 'fs' );
The fs variable should be declared on the server only. The fs package is already used by Meteor and does not need to be added separately.
Once you have done this, you can create files with Meteor.uuid() as their name, which makes them unique and very difficult to guess; a sketch follows below. It is also possible to delete the file after a certain amount of time by using Meteor.setTimeout.
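A rough sketch of that idea (the /tmp directory, the .zip suffix, the fileContents variable, and the 10-minute lifetime are all assumptions):

var filePath = '/tmp/' + Meteor.uuid() + '.zip'; // unique, hard-to-guess name
fs.writeFileSync(filePath, fileContents);
// Delete the file after 10 minutes so the link stops working
Meteor.setTimeout(function() {
    fs.unlinkSync(filePath);
}, 10 * 60 * 1000);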
The question is: where do the files to be downloaded come from?
Solution using Heroku Cloud and NodeJS Meteor Hooks
Heroku in particular is actually great for temporary file download links: they offer a "temporary scratchpad" filesystem that is reset every time the program restarts, and each running Node server cannot see the files other instances have created.
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno's lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted.
Taken from the Heroku documentation: https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
Thus, any files written to the "filesystem" will be temporary.
This allows for a very easy solution to this problem: you can simply use NodeJS filesystem manipulation to create temporary files on the server, serve them once (or for a limited time), and then remove them so they cannot be downloaded again.
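A bare-bones sketch of that flow with plain Node core modules (the /tmp path and URL scheme are assumptions, and real code would need path sanitization):

var fs = require('fs');
var http = require('http');

http.createServer(function(req, res) {
    var filePath = '/tmp' + req.url; // e.g. GET /abc123.zip -> /tmp/abc123.zip
    fs.readFile(filePath, function(err, data) {
        if (err) {
            res.writeHead(404);
            return res.end('Gone');
        }
        res.writeHead(200, {'Content-Type': 'application/octet-stream'});
        res.end(data);
        fs.unlink(filePath, function() {}); // remove after the first serve
    });
}).listen(3000);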
This, in combination with something like $.download(), will make for a seamless experience that in turn prevents unauthorized repeat downloads.