Docker Document Server "Download Failed" - node.js

I'm trying to test OnlyOffice locally. I'd like to open .docx files on my website.
I was quite confused, but I think I don't need Community Server for this, right?
The Document Server looks fine, even when I set the volumes. For now, let's use this:
sudo docker run -i -t -d -p 80:80 --restart=always onlyoffice/documentserver
The Document Server runs fine. When I open http://localhost I see "The Document Server is running..."
Then I tried to follow the Node.js example instructions
https://api.onlyoffice.com/editors/example/nodejs
The server is running fine. I changed config/default.json to have
{
"server": {
...
"siteUrl": "http://localhost"
}
}
The Node.js example also runs fine
But when I create a file... these errors appear:

This error means that the Document Server doesn't have access to the storage, so the document editing service cannot download the file for editing.
The link to the file specified in document.url must be accessible from the document editing service, i.e. from inside the container.
Both errors are described here.
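A quick way to verify this (a sketch — the container lookup and the example's port are assumptions) is to request the file URL from inside the Document Server container:

```shell
# Hedged check: curl the document.url target from inside the Document
# Server container. Note that "localhost" inside the container refers to
# the container itself, not to your host, so a document.url like
# http://localhost:3000/... will not reach the Node.js example.
# The ancestor filter and port 3000 are assumptions.
CONTAINER_ID=$(docker ps -q --filter ancestor=onlyoffice/documentserver)
# On Linux the host is usually reachable from the default bridge at
# 172.17.0.1; on Docker Desktop use host.docker.internal instead.
docker exec "$CONTAINER_ID" curl -IsS "http://172.17.0.1:3000/"
```

If this returns a connection error, the editors' "Download failed" is expected: fix document.url (or the siteUrl) so it resolves from inside the container, not just from your browser.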

Related

Web application no longer restarts automatically on Azure WebApps on TOMCAT server: HTTP 404

My applications were working properly on Azure, but after a Microsoft Docker update they no longer all restart automatically.
I get an HTTP 404.
My analysis:
This application was deployed with Microsoft tools
az webapp deploy ...
In the log /home/DeploymentLogStream/xxx.log,
the application is deployed correctly and it is present on the file system:
...
"Clean deploying to /home/site/wwwroot/webapps/wholesale"}
"Generating deployment script."}
"Using cached version of deployment script (command: 'azure -y --no-dot
"Running deployment command..."}
"Command: \"/home/site/deployments/tools/deploy.sh\""}
"Handling Basic Web Site deployment."}
....
"Requesting site restart"}
"Requesting site restart. Attempt #1"}
"Successfully requested a restart. Attempt #1"}
"Deployment successful. deployer = OneDeploy deploymentPath = OneDeploy
In the Docker log file I see an update of the Microsoft image:
675db21ca06b Extracting 133B / 133B
675db21ca06b Pull complete
Digest: sha256:932deb9018db39b74249774b4206906424f3fea09b791e9318c43316dc695aff
Status: Downloaded newer image for mcr.microsoft.com/azure-app-service/tomcat:8.5-jre8_220406005815
Pull Image successful, Time taken: 1 Minutes and 11 Seconds
Starting container for site
docker run -d -p 8871:80 --name ..
Container ... initialized successfully and is ready to serve requests.
But in the Tomcat log file I only see the default ROOT module, not my web application,
and indeed my app is not copied from /home/site/wwwroot/webapps/wholesale to /usr/local/tomcat/webapps/wholesale:
/usr/local/tomcat/webapps
Redeploying does not change the problem.
Multiple restarts solve the problem, but I don't understand why.
The workaround is to manually copy the directory from /home to /usr, but the next update will remove it.
Does someone have an idea how to solve this?
Apologies for the inconvenience with this issue and delayed response here.
Users on the auto-update version of Tomcat on Linux who deploy multiple applications in the webapps directory may see their apps return 404.
You can manually pin the version to the respective working version:
Tomcat 8.5: 8.5.72
Tomcat 9.0: 9.0.54
Tomcat 10.0: 10.0.12
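With the Azure CLI, pinning might look like this (a sketch — the resource group, app name, and exact runtime string are assumptions; list the available strings first and use one of those):

```shell
# Hedged sketch: pin the Linux App Service runtime to a fixed Tomcat build
# instead of the auto-updating 8.5 tag. Names below are placeholders, and
# the runtime string must match one reported by list-runtimes.
az webapp list-runtimes --os-type linux | grep -i tomcat
az webapp config set \
    --resource-group my-rg \
    --name my-wholesale-app \
    --linux-fx-version "TOMCAT|8.5.72-java8"
```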
You can also check the Azure Service Health history for notification info.
Kindly let us know how it goes.

Onlyoffice or WOPI protocol

I'm looking for an MS Office editor API for my web app to enable users to upload/create Word or Excel documents, edit them, and save them online. I found that ONLYOFFICE and the WOPI protocol can provide this, but I'm not sure which one works better or is easier to develop with. I'd appreciate it if you could share your experience.
I'd recommend you use the Docker version of Community Server to provide the service. Your web app can open a webview with a URL pointing to the Community Server.
git clone https://github.com/ONLYOFFICE/Docker-CommunityServer.git
Edit docker-compose.workspace.yml and remove all the mail stuff if, like me, you don't need mail.
# Start
docker-compose -f docker-compose.workspace.yml up -d
# Open browser to localhost
# Stop
docker-compose -f docker-compose.workspace.yml down
# Stop and delete volume data
docker-compose -f docker-compose.workspace.yml down -v

Open pdf file created in a docker container(Alpine)

I am trying to open a PDF file that I created inside a Docker container. I tried using xdg-open and Firefox, but I'm getting the following errors:
www-browser: not found
links2: not found
elinks: not found
links: not found
lynx: not found
w3m: not found
xdg-open: no method available for opening '1.pdf'
I don't know what to do. Please help.
Copy the PDF out of the Alpine container with docker cp alpine:/path/to/pdf . and open it on the host.
What you need is to mount a volume.
But:
if you want to open the file from within your container, you can use an X server like Xming and forward your display from the container by passing the DISPLAY variable to it;
if you want to open it on the host, just go to the mounted folder and open it with any PDF viewer.
With the second option you can also check whether the file was created correctly — a handy way to rule out a problem with the file itself.
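A minimal sketch of the volume route (the image name, paths, and the PDF-producing command are stand-ins for your own):

```shell
# Hedged sketch: bind-mount a host directory into the container so the
# generated PDF ends up on the host, where any desktop viewer can open it.
mkdir -p "$HOME/pdf-out"
docker run --rm -v "$HOME/pdf-out:/out" alpine \
    sh -c 'echo "stand-in for your generated PDF" > /out/1.pdf'
# The file now exists on the host:
ls -l "$HOME/pdf-out/1.pdf"
```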

pywatchdog and pyinotify not detecting changes on files inside ftp created directories

I have an application monitoring files sent to an FTP server (proftpd 1.3.5a). I am using pywatchdog to monitor file creation in the FTP server root (the app runs locally), but under one very specific circumstance it issues no notification: when I create a new directory through FTP and then create a file under that directory, the file creation/modification events are not caught!
To reproduce it in a simple way I've used pyinotify (0.9.6) directly, and it looks like the problem comes from there. A simple way to reproduce the problem:
Install proftpd and pyinotify (python3) on the server with default settings
In the server, run the following command to monitor the FTP root (recursive and auto-add turned on, assuming user "user"):
python3 -m pyinotify -v -r -a /home/user
In the client, create a sample.txt, connect to the FTP server, and issue the following commands, in this order:
mkdir dir_a
cd dir_a
put sample.txt
There will be no events related to sample.txt - neither create nor modify!
I've tried to remove the FTP factor from the issue by manually creating and moving directories inside the watched target and creating files inside these directories, but then the issue does not happen — it all works smoothly.
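For what it's worth, the FTP timing can be imitated locally by chaining the two operations with no pause between them (a sketch; the watch root is a temp directory standing in for /home/user). If auto-add installs the watch on dir_a only after processing its IN_CREATE event, a file created immediately afterwards may slip through unwatched:

```shell
# Hedged repro sketch: create a directory and a file inside it back to
# back, mimicking "mkdir dir_a; put sample.txt" over FTP. Run this while
# `python3 -m pyinotify -v -r -a <root>` is watching the root.
WATCH_ROOT="$(mktemp -d)"   # stand-in for /home/user
mkdir "$WATCH_ROOT/dir_a" && touch "$WATCH_ROOT/dir_a/sample.txt"
ls "$WATCH_ROOT/dir_a"
```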
Any help will be appreciated!

Website images don't load, but other static files do

I have a problem that has confused me for hours. I have a small website with a chat application on an Express server. On localhost there is no problem at all: images load normally, CSS and JS files are OK, everything is perfect.
But as soon as I push the code to IBM Cloud (formerly Bluemix), images give me a 404. The rest of the static files do get served, though, and the application otherwise works normally.
The file structure looks like this:
--client
--resources
logo.png
--scripts
loginScreen.js
--stylesheets
stylesheet1.css
index.html
--server
app.js
The server starts in app.js, and in the code I've put this before initializing the server:
expressApp.use(express.static(path.join(__dirname, "..", "client")));
I had some small problems with filename casing, which I detected after building a Docker container, but this is resolved and shouldn't be the problem. Any ideas?
The Simple Cloud Foundry Toolchain was registering my commits in GitHub successfully and triggered the auto-build and deploy.
All code changes were pulled normally by the Bluemix server, but the filename changes were not. For example, the file called Logo.png was renamed to logo.png locally. This change was pushed to GitHub normally, but logging into the Bluemix server over SSH revealed that the filename there was still Logo.png.
I had to change the filenames manually with mv, and now it works.
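If the deployment pulls from git, the root cause may be that git on a case-insensitive filesystem (the macOS/Windows defaults) never recorded the case-only rename. A sketch, run in a throwaway repo, of forcing the rename through git using a two-step mv so it is recorded on any filesystem:

```shell
# Hedged sketch: perform a case-only rename through git in two steps so it
# is committed even on case-insensitive filesystems.
set -e
REPO="$(mktemp -d)"
cd "$REPO"
git init -q
git config user.email demo@example.com
git config user.name demo
touch Logo.png
git add Logo.png && git commit -qm "add Logo.png"
git mv Logo.png logo.tmp    # step 1: rename away from the old name
git mv logo.tmp logo.png    # step 2: rename to the new casing
git commit -qm "rename Logo.png -> logo.png"
git ls-files
```

After this, the push carries the rename, so the server-side checkout gets logo.png instead of keeping the stale Logo.png.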
