How to refresh an application deployed on a Linux machine

I have deployed an Angular application on a sandbox Linux machine. When I replace data in the assets folder, the changes are not reflected on the website, even though I run sudo service httpd restart.
I am using the PuTTY command prompt and connecting to the server via SSH.
How can I make the changes take effect, or recompile the code/application, using commands?

It's going to depend on how your Angular build/deploy toolchain works.
Basically, httpd reads the files on the filesystem. When you update the files, you don't need to restart the httpd service; it will serve whatever is there.
However, Angular is another story. You're probably on the right track in that you likely need to recompile your Angular application, but with what you've provided I don't think we can answer that for you, other than to say:
Here are the docs about deploying Angular apps: https://angular.io/guide/deployment
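For example, here is a minimal rebuild-and-redeploy sketch. It assumes the Angular CLI is available on the server, that the project lives at a hypothetical path, and that httpd serves the site from /var/www/html; adjust all of these to your actual setup.
# Rebuild the Angular app from the project root (path is a placeholder)
cd /path/to/your-angular-project
npm install
ng build --prod    # on newer CLI versions: ng build --configuration production
# Copy the fresh build into the directory httpd serves (assumed docroot)
sudo cp -r dist/your-project-name/* /var/www/html/
# No httpd restart is needed; it serves whatever is on disk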

Related

Qt6 Installer SDK and Online Installation source

I am trying to create an online installer for a Qt6 application. In this case, it is a Python-based GUI app, compiled with Nuitka.
After creating the repository and the installer itself
repogen -p packages repository
binarycreator --online-only -c config/config.xml -p packages Installer
the repo folder is populated and the installer is created. But each time I run the installer, I get 'Cannot retrieve distant tree'.
The question is: which remote servers are allowed, and how do I access them?
SFTP: debug mode tells me 'unsupported protocol'
FTP: connects, but keeps asking for the password, probably due to secure FTP
Google drive: seems not to work, probably because of redirections?
Nextcloud server: seems not to work, probably because of redirections?
Office365 cloud: seems not to work, probably because of redirections?
Or is the only way to set up an HTTP server that exposes the update folder without a password?
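As a quick sanity check, one thing that may help is serving the generated repository over plain HTTP locally and pointing the installer at that URL; the port, paths, and installer name below are assumptions based on the commands above.
# Serve the generated repository over plain HTTP for testing
cd repository
python3 -m http.server 8000
# In another shell, run the installer. Assuming config.xml's
# <RemoteRepositories>/<Repository>/<Url> points at http://localhost:8000,
# the 'Cannot retrieve distant tree' error should go away if the
# repository itself is valid.
./Installer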

How to develop Node.js apps with Docker on Windows?

I am developing a Node.js app on Windows 10 WSL with Remote Containers in Visual Studio Code.
What are the best practices for Dockerfile and docker-compose.yml at this time?
Since we are in the development phase, we don't want to COPY or ADD the program source code in the Dockerfile (it's not practical to recreate the image every time we change one line).
I use Docker Compose to bind-mount the folder with the source code on the Windows side as a volume, but in that case the source code folder and the files seen from the Docker container all have root permissions.
In the Docker container, Node.js runs as the unprivileged node user.
For the above reasons, Node.js will not have write permission to the folders you bind.
Please let me know how to solve this problem.
I found a way to specify a UID or GID, but I could not apply it because I am binding from Windows.
You can optionally mount the Node code using NFS in Docker Compose.
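As an alternative sketch (not the NFS approach mentioned above): if the source tree lives inside the WSL filesystem rather than on the Windows drive, you can bind-mount it and run the container as your own UID/GID so the node process can write to it. The project path and image tag below are placeholders.
# Run from inside WSL, with the project checked out on the Linux filesystem
cd ~/projects/my-node-app
docker run --rm -it \
  -v "$PWD":/usr/src/app \
  -w /usr/src/app \
  --user "$(id -u):$(id -g)" \
  node:18 \
  npm install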

How to run two instances of CouchDB on the same physical Linux machine

I would like to have two CouchDB servers running on my machine.
I already have one instance running, installed via this command:
sudo apt-get install couchdb -y
I can run it and stop it via
/etc/init.d/couchdb [start|stop|restart]
How can I run another instance of CouchDB on a different port?
OS: Ubuntu 16.04
You can use a different configuration file to start a second instance. This is definitely an advanced topic, as you must take extra care that the different CouchDB instances don't share any data, log, or configuration files. You can find some information about configuration in the CouchDB docs. You could start by duplicating the startup script (/etc/init.d/couchdb) and adapting the folders there, then copying the local.ini from the config folder and changing the data folders, HTTP port, and other configuration there.
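For example, here is a rough sketch of that approach on a CouchDB 1.x apt install; the paths, port, and data directories below are assumptions, so check where your package actually put them.
# Copy the existing config and give the second instance its own settings
sudo cp /etc/couchdb/local.ini /etc/couchdb/local-2.ini
# In local-2.ini set, for example:
#   [httpd]    port = 5985
#   [couchdb]  database_dir = /var/lib/couchdb2
#              view_index_dir = /var/lib/couchdb2
# Then start a second server with that file appended to the config chain
couchdb -a /etc/couchdb/local-2.ini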
I used this (quite old) build script to install completely separate copies and found it easier to work with.
But nowadays I would just use Docker and run several CouchDB containers, preferably with the klaemo/couchdb image, which is easy to handle.
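As a minimal sketch of the Docker route, you can map two containers to different host ports; the image, ports, and credentials below are placeholders (newer official couchdb images require admin credentials to be set).
# First instance on host port 5984
docker run -d --name couchdb-a \
  -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=secret \
  -p 5984:5984 couchdb
# Second instance on host port 5985, with its own container and data
docker run -d --name couchdb-b \
  -e COUCHDB_USER=admin -e COUCHDB_PASSWORD=secret \
  -p 5985:5984 couchdb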

How to deploy a Docker image to make changes in the local environment?

EDIT +2: Just FYI, I am a root user, which means I do not have to type superuser do (sudo) every time I run a privileged command.
Alright, so after about 24 hours of researching Docker, I am a little upset, if I have my facts straight.
As a quick recap, Docker serves as a way to write code or configuration-file changes for a specific web service, runtime environment, or virtual machine, all from the cozy confines of a Linux terminal/text file. This is beyond a doubt an amazing feature: having code or builds you made on one computer work on an unlimited number of other machines is truly a breakthrough. I am annoyed, though, that the terminology is muddled with respect to what containers are and what images are (images are save points of layers of code that are pulled from Docker's servers or can be created from containers, which in turn require a base image to build on; Dockerfiles serve as a way to automate the build process by running all the desired layers and rolling them into one image so it can be accessed easily).
The catch with Docker is that, sure, it can be deployed on a variety of different operating systems and use their respective commands, but those commands do not really carry over to something like the local environment. While running some tests on a docker build working with CentOS, the basic command structure goes:
FROM centos
RUN yum search epel
RUN yum install -y epel-release.noarch
RUN echo epel installed!
So this works within the docker build and says it successfully installs it.
The same can be said for Ubuntu by running apt-cache instead of yum. But going back to the CentOS VM, it DOES NOT show that epel has been installed, because when attempting to run the command
yum remove epel-release.noarch
it says "no packages were to be removed yet there is a package named ...". So then, if docker is able to be multi-platform why can it not actually create those changes on the local platform/image we are targeting? The docker builds run a simulation of what is going to happen on that particular environment but i can not seem to make it come to pass. This just defeats one of my intended purposes of the docker if it can not change anything local to the system one is using, unless i am missing something.
Please let me know if anyone has a solution to this dilemma.
EDIT +1: OK, so I figured out yesterday that what I was trying to do was to view and modify the container, which can be done with either docker logs containerID or docker run -t -i img /bin/sh, which puts me into an interactive shell where I can make container changes. Still, I want to know if there is a way to make Docker communicate with the local environment from within a container.
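For reference, here is a small sketch of getting an interactive shell either in a fresh container or in one that is already running; the image and container name are placeholders.
# Start a new container with an interactive shell
docker run -t -i centos /bin/sh
# Or open a shell inside a container that is already running
docker exec -t -i my-running-container /bin/sh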
So, I think you may have largely missed the point behind Docker, which is the management of containers that are intentionally isolated from your local environment. The idea is that you create containerized applications that can be run on any Docker host without needing to worry about the particular OS installed or configuration of the host machine.
That said, there are a variety of ways to break this isolation if that's really what you want to do.
You can start a container with --net=host (and probably --privileged) if you want to be able to modify the host network configuration (including interface addresses, routing tables, iptables rules, etc).
You can mount parts of (or all of) the host filesystem as volumes inside the container using the -v command-line option. For example, docker run -v /:/host ... would expose the root of your host filesystem as /host inside the container.
Normally, Docker containers have their own PID namespace, which means that processes on the host are not visible inside the container. You can run a container in the host PID namespace by using --pid=host.
You can combine these various options to provide as much or as little access to the host as you need to accomplish your particular task.
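For example, here is a sketch combining those flags (the image and command are placeholders):
# Share the host's network and PID namespaces, run privileged, and expose
# the host filesystem under /host inside the container
docker run --rm -it \
  --net=host \
  --pid=host \
  --privileged \
  -v /:/host \
  centos /bin/bash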
If all you're trying to do is install packages on the host, a container is probably the wrong tool for the job.

Is there a security risk to having node.js installed on a shared server?

I have installed "node.js" on a shared server. I have rename the directory so that it can not be found easily. Also I have my node directory in a location above /public_html.
I have also installed node on my PC for programming and testing easy on my local system vrs my web server.
What I would like to know is does this create a security risk where someone could hack my sites if they knew where my node installation files exist?
I have not added the path to my bash, so the commands have to be executed manually by using the ~ representing home, and the path. Such as:
~/pathtodir/bin/npm -v
