OpenLiberty Docker image sample application not working - Azure

I have created my own Docker image using the following Dockerfile:
FROM open-liberty:webProfile8
COPY mysample.war /config/dropins/
COPY server.xml /config/
The Docker image got generated with a warning:
Successfully built cc05c3d94adf
Successfully tagged sampleopenlibty:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
I pushed this image to Azure Container Registry and created an App Service from it, but whenever I browse to the web app, instead of my custom/sample web app it shows the out-of-the-box page of Open Liberty.
Can someone please help me figure out what I did wrong and how to fix this?
Thank you

I'll just summarize your comment and post it as an answer.
It was related to the database connectivity in the sample WAR file. We defined some Application settings (environment variables), but somehow they are not getting picked up by the app hosted in the Docker image. After hard-coding the values in the code and redeploying, everything works.
So now it boils down to fixing the Application settings (environment variables) to get it all working.
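As a first debugging step, it may help to confirm the settings actually reach the container: App Service injects Application settings into the container as environment variables, so a Java app should see them via System.getenv. A minimal sketch using the Azure CLI, with placeholder resource group, app, and setting names:
az webapp config appsettings set --resource-group my-resource-group --name my-sample-app --settings DB_HOST=mydb.example.com DB_USER=dbuser
az webapp restart --resource-group my-resource-group --name my-sample-app
The restart recreates the container so the new variables are visible to the app.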

Use COPY --chown=1001:0 mysample.war /config/dropins/
To know more, read "Updating file permissions" in the Open Liberty Docker docs (open-liberty-docker-docs).
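Applied to the Dockerfile from the question, the fix looks like this (a sketch; I'm assuming server.xml needs the same ownership fix as the WAR):
FROM open-liberty:webProfile8
COPY --chown=1001:0 mysample.war /config/dropins/
COPY --chown=1001:0 server.xml /config/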

Related

CrafterCMS has no site configured after initial setup

I downloaded the tar.gz for authoring with profile and social and also without profile and social.
After running startup.sh, http://localhost:8080/studio shows "No site configured. Please configure the site you want to show or select a site on the authoring environment."
A quick look at the deployer logs shows No config files found under /crafter/data/deployer/targets. No good documentation is available on this. I tried creating a YAML file after reading https://docs.craftercms.org/en/3.0/system-administrators/deployer/admin-guide.html, but it doesn't change the situation.
I haven't tinkered with any configuration; I only followed the quick start guide. No login page is displayed either.
I am using version 3.1.10 and I have 8 GB of RAM.
The Tomcat log file is quite large, so I have attached it on Google Drive:
https://drive.google.com/file/d/1-t37ETNWG94qcMnrXdIfwkHRa1khhtKp/view?usp=sharing
I have also attached the deployer logs
https://drive.google.com/file/d/1A1hNvIdQeMPOVTfTDv_PtU2Xa10r-mwK/view?usp=sharing
From your logs, it seems that you're missing ncurses5:
[ERROR] 2020-11-25T12:05:36,874 [Exec Stream Pumper] [exec.ManagedProcess] | mysql_secure_installation: /home/aditya/Desktop/crafter/bin/dbms/bin/mysql: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
That breaks the authoring database. To fix this, see the instructions for your Linux distro here:
https://docs.craftercms.org/en/3.1/system-administrators/activities/installing-and-verifying-prerequisites.html#linux-prerequisite
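For example, on Debian/Ubuntu-based distros the missing library can usually be installed with the legacy ncurses package (package names are an assumption and vary by distro; check the linked page for yours):
sudo apt-get update && sudo apt-get install -y libncurses5
On RHEL/CentOS 8 and Fedora the equivalent is typically:
sudo yum install -y ncurses-compat-libs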

Azure App Service not reflecting file changes from FTP

I am new to Azure and I have created a very simple App Service with everything default, then changed the App Service Plan to B1. I can browse to the App Service home page and see the default page. I then connect using FTP and try to change the default page, but the changes are not reflected.
I even downloaded the publish profile and published a .NET Core 3.1 Web API with defaults. I can see over FTP that the files are deployed, but the API is not there. I even deleted the default page, but the home page still appears. It seems FTP is not pointing to the location from which ASP.NET Core picks up the files.
You can refer to my answer in this post. Then use Kudu to check whether the timestamp of the last file updated via FTP is consistent with the release time. If the file was not updated, the update of course has no effect. Then we can check the FTP connection string.
But first, I suggest you modify index.html or the default interface function and update it via Kudu, then check whether the updated file takes effect. If it does, I am sure your code is OK.
Second, check your FTP connection string.
Step 1. Go to Deployment Center -> FTP and click FTP; you will see the Dashboard. In the Dashboard, find the FTPS endpoint, username and password.
Step 2. Use FileZilla to connect with those credentials. You can see the files there.
Then you can try again. Under normal circumstances, there is no problem updating via FTP. If the problem is still not resolved, I suggest deploying to local IIS for debugging.
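If you prefer to script the credential check rather than click through the portal, the Azure CLI can list the same FTP endpoint and username (app and resource group names below are placeholders):
az webapp deployment list-publishing-profiles --name my-sample-app --resource-group my-resource-group --query "[?publishMethod=='FTP'].{url:publishUrl,user:userName}" --output table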
I was facing the same problem: published content was not displaying when visiting the website. Then I changed the following settings and it worked.
I had the same issue updating files over FTP: the DLLs weren't being updated because they were in use by the site. I had to stop the App Service first and then update the files. The changes were then reflected when restarting it.
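The stop/update/start cycle can also be scripted with the Azure CLI; a minimal sketch, assuming placeholder names:
az webapp stop --name my-sample-app --resource-group my-resource-group
# ... upload the changed files over FTP ...
az webapp start --name my-sample-app --resource-group my-resource-group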

Visual Studio ClickOnce Web Deployment

I would be most grateful if anyone could help me solve this problem with ClickOnce Web deployment.
I have read all the threads on this subject and I have also read through all the Microsoft documentation on the subject. They seem to say a lot without actually being direct or providing helpful examples. However, perhaps I am wrong and I have not looked in the right places.
I have already used ClickOnce successfully to deploy an application on the local area network.
It works well and really isn't that complicated. However, my goal is to deploy this application to customers, who are not connected to my local network.
I have set up a web site (www.mydomain.co.za), which I can access directly or via the ftp protocol.
I have created a subdirectory off the root where I intend to publish the files created by the publish function. The publish function of the application requires a Publishing Folder Location and an Installation Folder URL; I don't really understand the functional difference between these two locations. If I set the Publishing Location to ftp://www.mydomain.co.za/MyProductName and the Installation Folder URL to http://www.mydomain.co.za/MyProductName, the publish process succeeds, and when I check on the web server the files seem to have been published successfully. A further Application Files/MyProductName subdirectory, with the version number information appended, was created where all the output was placed.
My next step is to grab the URL of the setup.exe file and run it from a browser. This downloads the setup.exe file to my downloads folder, which I then try to run, but I get an error:
Deployment and application do not have matching security zones.
I have seen this come up in other threads, but those threads don't seem to relate directly to what I am trying to do. They mention using Internet Explorer to achieve some degree of success, but all the browser did was download the file.
I have also noted with interest that a web page is created in the root with a button that prompts the user to install the application. This does not work either.
Does anyone know of an article that I can read on this subject which is more helpful or if anyone can offer more insights into this I would be very grateful.
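One thing worth checking (an assumption based on the error text, not a confirmed fix): this error is commonly reported when the deployment provider URL inside the .application manifest does not match the URL the installer actually runs from. The mage.exe tool from the Windows SDK can rewrite that URL and re-sign the manifest; the manifest and certificate names below are hypothetical:
mage -Update MyProductName.application -ProviderUrl "http://www.mydomain.co.za/MyProductName/MyProductName.application" -CertFile mycert.pfx -Password secret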

How can I prohibit anonymous access to my Node-RED UI dashboard on IBM Cloud (Bluemix)?

I'm working with Node-RED on the IBM Cloud boilerplate. I know that there is a way, by changing the values of environment variables (NODE_RED_USERNAME and NODE_RED_PASSWORD), to change the username and password of the flow editor. But what about the UI dashboard, i.e. when using dashboard nodes? How do I forbid access to
https://noderedservicename.mybluemix.net/ui/
I know that in the code I can achieve this by changing the httpNodeAuth variable in the settings.js file. What is the way to do that on IBM Cloud?
Thank you in advance!
You need to add httpNodeAuth (not httpAdminAuth, which controls access to the Node-RED editor and can be done with the environment variables discussed in the other answer) to the app/bluemix-settings.js file.
Something like this:
...
httpStatic: path.join(__dirname,"public"),
httpNodeAuth: {user:"user",pass:"$2a$08$zZWtXTja0fB1pzD4sHCMyOCMYz2Z6dNbM6tl8sJogENOMcxWV9DN."},
functionGlobalContext: { },
...
Details of how to generate the password hash can be found here.
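For example, the Node-RED documentation describes generating the bcrypt hash with the node-red-admin tool (run locally, then paste the result into the pass property above):
npm install -g node-red-admin
node-red-admin hash-pw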
There are a number of ways you can edit the file. Some involve linking the Node-RED deployment to a git repository, or downloading the whole app, editing the file and pushing it back to Bluemix. (When you first deploy Node-RED from the starter pack, it gives you instructions on how to download the source, make changes and push them back. You can get to these instructions by clicking the "Getting started" link on your Node-RED Bluemix console page.)
But the quickest/simplest/dirtiest way is probably to just SSH into the instance and change the file with something like vi. Details on how to SSH to an app instance can be found here, but the following should work:
cf ssh [app name]
Once you have edited the file you will need to tell Bluemix to restart the app. You can do this from the web console or with the cf command line tool.
(The changes made by this method will not survive if the app is restaged, or if Bluemix decides to move your instance to another machine internally, because it will rebuild the app from the pushed sources. The permanent solution is to download the source, edit it and push it back.)
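Putting the quick-and-dirty route together, a session might look like this (the app name is a placeholder; vi runs inside the instance's shell):
cf ssh my-node-red-app
vi app/bluemix-settings.js   # edit httpNodeAuth, then exit the ssh session
cf restart my-node-red-app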
This link will help you, but it's written in Japanese.
http://dotnsf.blog.jp/archives/1030376575.html
Summary
You can define "user-defined" environment variables through the IBM Cloud dashboard.
These include the variables that protect the Node-RED GUI.
You have to set them as follows:
NODE_RED_USERNAME : username
NODE_RED_PASSWORD : password
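The same variables can also be set from the cf command line instead of the dashboard (the app name is a placeholder); a restage is needed for them to take effect:
cf set-env my-node-red-app NODE_RED_USERNAME username
cf set-env my-node-red-app NODE_RED_PASSWORD password
cf restage my-node-red-app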

Build Error while deploying to App Engine using custom containers

I am using a Node.js-based custom deployment on App Engine with the google/nodejs-runtime Docker image.
I build and run my code on localhost via
gcloud preview app run .
and it runs fine. But when I deploy it to App Engine using
gcloud preview app deploy .
it gives this error (along with the whole stack trace):
Build Error: Get https://index.docker.io/v1/repositories/google/nodejs-runtime/images: dial tcp: lookup index.docker.io on 192.168.2.1:53: no answer from server.
This seems to be an error while fetching the repository from the internal Google repos. Is it?
If yes, why is this, and how can I make sure it doesn't happen again?
If no, then what is it and how do I solve it?
So this is what happens while deploying to App Engine:
Docker checks Google's repo for the image, as the image file I am using (google/nodejs-runtime) is by Google and hosted on Google as well (I guess).
It checks for any updates to the Docker image and updates the image file; this is necessary to always push updated versions to the deployment.
It pulls the changes, pushes the image to the App Engine server and checks if the app is serving.
While my assumption was that the issue was in the Docker registry on Google's end, it was not so.
There were actually two issues causing the problem:
Primary issue (as mentioned in the question's comments by @John-Lowry):
The error was DNS name resolution when connecting to the Docker servers. This was due to my ISP's DNS, which was blocking access to the servers. It was resolved when I changed ISPs, and it also resolved itself after some time on ISP #1.
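A quick way to confirm this kind of DNS failure (a generic check, not something from the original post) is to compare the default resolver against a public one:
nslookup index.docker.io
nslookup index.docker.io 8.8.8.8
If the first lookup times out (as in the error above, where the resolver at 192.168.2.1 gave no answer) but the second succeeds, the problem is the ISP/router DNS rather than Docker or App Engine.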
Secondary issue (not sure if it is exactly related):
Once I switched ISPs, the following error occurred:
google.appengine.tools.docker.containers.ImageError: Image with tag google/docker-registry was not found
This was resolved once I updated the image from the server (see this Similar Question), and finally the updated app was pushed to the servers.
I am not sure why the error occurred, but this is what I assessed:
The Docker registry image (google/docker-registry) was updated at some point before the deployment. During deployment, the checks were performed for the image in use (google/nodejs-runtime), but since nodejs-runtime depends on docker-registry (I am guessing for maintaining various access policies), it was necessary to update to the newer image.
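If you hit the same tag-not-found error, refreshing the local copies of both images before redeploying is the equivalent of that fix (image names are the ones from the errors above):
docker pull google/docker-registry
docker pull google/nodejs-runtime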
Please suggest edits.
