Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 4 years ago.
I'm trying to deploy a Docker project to a remote VPS. I use docker-machine to create a remote instance, but despite setting (I think) the local Docker environment variables, docker-compose does not build on the remote machine.
I've created a remote VPS via docker-machine create.
I then run eval $(docker-machine env test)
docker-machine active confirms I'm 'on' the remote machine, as does my now-modified command prompt.
When I run either docker-compose build or docker-compose up, I get the following error:
ERROR: SSL error: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590)
(I've searched, but haven't found how to resolve this.)
If I instead prepend sudo, making the commands sudo docker-compose build and sudo docker-compose up, both run without errors. The problem is that my containers are spun up locally (docker ps agrees), not remotely at the IP obtained from docker-machine ip test.
I am using ubuntu 16 locally.
Docker version 18.06.1-ce, build e68fc7a
docker-compose version 1.8.0, build unknown
docker-machine version 0.16.0, build 702c267f
Following the suggestion from @BMitch to update docker-compose, the problem is resolved. I am now running docker-compose 1.23.2, build 1110ad01, which builds and deploys as expected. The SSL errors are also gone.
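For anyone hitting the same issue, one way to upgrade is to install the official release binary (a sketch assuming a Linux host and that /usr/local/bin is on your PATH; the version pin 1.23.2 matches the release mentioned above):

```shell
# download the docker-compose 1.23.2 release binary for this platform
sudo curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# make it executable and confirm the new version
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
```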
Closed 2 years ago.
I was trying to install Ant Media Server, and at the time of running the server I got the error:
System has not been booted with systemd as init system (PID 1). Can't operate.
These are the steps I followed:
Step 1:
wget https://github.com/ant-media/Ant-Media-Server/releases/download/ams-v2.1.0/ant-media-server-2.1.0-community-2.1.0-20200720_1340.zip
Step 2:
unzip ant-media-server-2.0.0-community-2.0.0-20200504_1842.zip
Step 3:
wget https://raw.githubusercontent.com/ant-media/Scripts/master/install_ant-media-server.sh && chmod 755 install_ant-media-server.sh
Step 4:
sudo ./install_ant-media-server.sh ant-media-server-2.0.0-community-2.0.0-20200504_1842.zip
Step 5:
service antmedia status
After running "Step 4" I got the error at the end.
Is there any solution for this, or am I doing something wrong?
The error I got:
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
E: Command line option 'y' [from -y] is not understood in combination with the other options.
update-alternatives: error: no alternatives for mozilla-javaplugin.so
update-java-alternatives: plugin alternative does not exist: /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/IcedTeaPlugin.so
System has not been booted with systemd as init system (PID 1). Can't operate.
antmedia: unrecognized service
antmedia: unrecognized service
There is a problem in installing the ant media server. Please send the log of this console to contact@antmedia.io
ScreenShot of Terminal
As I understand it, you are running Ant Media Server in an Ubuntu environment under Windows (likely WSL), which does not boot systemd as PID 1, so services cannot be managed with the service command. We recommend starting Ant Media Server with its start.sh script instead of as a service. The other choice is running Ant Media Server in a full VM such as VirtualBox or VMware.
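As a sketch, assuming the installer's default directory /usr/local/antmedia (the path is an assumption; use wherever the install script placed the server), starting it directly would look like:

```shell
# run Ant Media Server in the foreground instead of via systemd
cd /usr/local/antmedia
sudo ./start.sh
```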
Closed 7 years ago.
I have set up a Linux VM on Azure - Ubuntu Server 14.04 LTS.
My goal is to be able to do remote desktop connection from my Windows 10.
I'm a complete newbie with Linux, which is why I've been following this tutorial. Everything seems to work fine until the point where I need to create a "Standalone Endpoint". The interface has changed in Azure's portal. What I've done is create an endpoint as in the pic below, but when I try to click "Connect", the option is still disabled.
Port 3389 should be opened in "Inbound security rules"; you can refer to "Where is the EndPoint setting for VM in new Azure portal" for details.
After opening port 3389, you can follow the commands below to install a desktop and enable RDP connections on your Linux VM.
Update your system and install a desktop; I chose xfce4:
sudo apt-get update
sudo apt-get install xfce4
Install xrdp and start it
sudo apt-get install xrdp
sudo /etc/init.d/xrdp start
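One extra step that is often needed (an assumption on my part, not part of the original tutorial): tell xrdp to launch the xfce session for your user, otherwise you may get a blank screen on connect.

```shell
# make xrdp start an xfce4 session for this user
echo xfce4-session > ~/.xsession
```

After this, restart xrdp with sudo /etc/init.d/xrdp restart and reconnect.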
Next add a user that you want to be able to use the Remote Desktop with, the “primech” bit is the username. You get prompted for some other user-type data.
sudo adduser primech
sudo adduser primech sudo
Open the Remote Desktop Connection tool in your Windows 10 (you can start it by running the command mstsc), input your Ubuntu VM's public IP address, and click the 'Connect' button.
You will then get the same dialog as in your tutorial; just input the username primech and the password, and you will be able to see the remote desktop.
This is correct. RDP (port 3389) is a Windows-specific capability. SSH is the default and only supported means of remotely connecting to a Linux VM on Azure.
You might be able to configure VNC to run on the Linux box and create an endpoint (VNC uses port 59xx, with xx being the display number, e.g. 00 for :0), but I haven't tried this and am not sure it's supported.
RDP (Remote Desktop) is, I believe, a Windows-only feature; a Linux VM doesn't support it out of the box.
To my knowledge, instead of RDP you have the options of SSHing into a Linux VM for a command line, or using VNC for an RDP-like experience.
Closed 7 years ago.
I am trying to copy a directory from my Amazon Linux machine to my VirtualBox VM. I run the following command on my Amazon Linux machine:
scp /home/user/test xyz@xyz-VirtuaBox:/home/user
but I get the error message:
Could not resolve hostname xyz-virtualbox: Name or service not found.
I am not sure what's going on. My virtual machine's hostname is right.
No! Your virtual machine's hostname is not resolvable from the Amazon Linux machine. You should do this the other way round. From the virtual machine:
scp xyz@amazon:/home/user/test /home/user
The other option is to set up remote port forwarding, so you can connect from your Amazon machine to your VirtualBox VM. The details depend on whether you use PuTTY or plain ssh, but the general commands can look like this:
[local] $ ssh -R 2222:xyz-VirtuaBox:22 amazon
[amazon]$ scp -P 2222 /home/user/test xyz@localhost:/home/user
To make the copy, you need an open SSH port to your virtual machine; then use the syntax:
scp -pr -P <port> <directory> user@ip:<path_directory_destination>
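For example, with a hypothetical forwarded port 2222 and VM address 192.168.56.101 (both values are illustrative, not from the question), copying the directory recursively would be:

```shell
# -p preserves times/modes, -r recurses into the directory, -P sets the SSH port
scp -pr -P 2222 /home/user/test xyz@192.168.56.101:/home/user
```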
Closed 3 years ago.
I have installed LXC (Linux Containers) on an Ubuntu Server 14.04 host and I have some virtual servers running on it. Now I want to migrate all these containers to LXD. I have worked hard configuring these containers and I don't want to lose all of that configuration.
This is my sketch:
HOST: Ubuntu Server running LXC
├── Container: Ubuntu 12
├── Container: Ubuntu 12
└── Container: CentOS
Is there any way to do it?
Thanks
As I said in "migrating lxc to lxd", you can do so by creating a dummy LXD container and replacing its rootfs, then updating some of the config to match your LXC container's configuration.
Specifically, if your source container was privileged, you'll want to set security.privileged=true, at least until you have confirmed your workload works properly unprivileged (then set security.privileged=false and restart the container with lxc restart).
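A rough sketch of that procedure, assuming the default paths /var/lib/lxc (LXC) and /var/lib/lxd (LXD) and a container named web (the name and paths are illustrative, not from the answer):

```shell
# create a dummy LXD container to get the metadata/config scaffolding
lxc init ubuntu:14.04 web
# replace its rootfs with the old LXC container's rootfs
sudo rm -rf /var/lib/lxd/containers/web/rootfs
sudo cp -a /var/lib/lxc/web/rootfs /var/lib/lxd/containers/web/rootfs
# keep the old privileged behaviour until unprivileged operation is confirmed
lxc config set web security.privileged true
lxc start web
```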
If you are talking about http://www.ubuntu.com/cloud/tools/lxd, I think it's very early to start with it. I have been following this project since its first commit three months ago; there hasn't been a release yet.
PS:
Getting started with LXD
Our OpenStack container capability, codenamed nova-compute-flex, is included in Ubuntu OpenStack for Juno, which you can download via the Ubuntu Cloud Archive. Simply type the following commands to enable and use it:
sudo add-apt-repository cloud-archive:juno
sudo apt-get update
sudo apt-get install nova-compute-flex
OpenStack Juno is available for Ubuntu Server 14.04 LTS and 14.10.
Closed 8 years ago.
As far as I know, a Linux container is different from a virtual machine: it's a lightweight virtualization technology. So I'm wondering whether it can run on a virtual machine provisioned by a hypervisor like Xen, KVM, or VMware.
I was trying to set up a Linux container (Docker + the LXC userspace tools) on a virtual machine based on Xen. It failed.
[root@docker lib]# service docker start
Starting cgconfig service: Error: cannot mount cpuset to /cgroup/cpuset: Device or resource busy
/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf [FAILED]
Starting docker: [ OK ]
and if trying to run a container:
[root@docker lib]# docker run -i -t ubuntu /bin/echo hello world
lxc-start: error while loading shared libraries: liblxc.so.1: cannot open shared object file: No such file or directory
2014/03/27 14:03:27 Error: start: Cannot start container da0d674d3e31a7c36a9e352f64fd84986cbb872e526cb2dd6adb7473d4f5a430: exit status 127
Actually, I was following a blog post; the author made it work, while I failed.
Can anyone explain this? Or simply tell me it cannot be run on a virtual machine. Really appreciated.
Yes, it can, if your VM's operating system supports the appropriate filesystems and cgroups needed for containers. I suggest you go through the guide at https://www.docker.io/gettingstarted/ and use a recent Ubuntu release, since that is known to work.
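A quick diagnostic sketch (my own suggestion, not from the answer) to check whether the guest is ready for containers; the cgconfig error above points at the cgroup mounts, and the lxc-start error at a missing liblxc:

```shell
# verify the cgroup hierarchies are mounted in the guest
grep cgroup /proc/mounts
# verify the LXC userspace tools and their shared library are present
ldconfig -p | grep liblxc || echo "liblxc not found"
which lxc-start || echo "lxc-start not found"
```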