I used GitBlit on Windows, but I recently purchased a Synology NAS.
http://gitblit.com (an open-source, "GitHub-like" server)
The NAS runs a custom Linux distribution. I installed the Ubuntu version of GitBlit, and it works great.
However, I can't install the GitBlit service, which I need so that the server starts at boot. When I try to launch install-service-ubuntu.sh, it fails because there is no update-rc.d command on this Linux distribution:
#!/bin/bash
sudo cp service-ubuntu.sh /etc/init.d/gitblit
sudo update-rc.d gitblit defaults
So I would like to install the service manually, but I don't know this Linux system well.
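If the NAS's Linux uses systemd (I believe newer Synology DSM versions do, but I haven't confirmed it), I am wondering whether a hand-written unit file would do the job. Here is a sketch of what I mean; the /opt/gitblit path, the jar name, and the java location are all assumptions from my install, not something I've verified:

```shell
# Sketch of a hand-written systemd unit for Gitblit; the /opt/gitblit path,
# jar name, and java location are assumptions, not verified on DSM.
cat > /tmp/gitblit.service <<'EOF'
[Unit]
Description=Gitblit Git server
After=network.target

[Service]
WorkingDirectory=/opt/gitblit
ExecStart=/usr/bin/java -jar /opt/gitblit/gitblit.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# As root, the unit would then be installed and enabled with:
#   cp /tmp/gitblit.service /etc/systemd/system/
#   systemctl enable --now gitblit
```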
Thanks.
If Elasticsearch is always running on port 9200, do I have to start it each time I use it?
I am using Linux, macOS, and Windows. In a nutshell, I run "bin/elasticsearch" (or some variation depending on the OS) and it starts Elasticsearch.
I just want to know why it is always on port 9200, and whether I need to start Elasticsearch each time I boot the operating system.
With the manual install, you download elasticsearch.zip, unzip it, and run it yourself with "bin/elasticsearch" each time.
Alternatively, you can download the deb (for Debian or Ubuntu) or rpm (for Red Hat or CentOS) package and install it as a service. For example, on CentOS:
sudo wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
sudo rpm -ivh elasticsearch-6.0.0.rpm
sudo systemctl start elasticsearch.service
sudo systemctl enable elasticsearch.service  # start automatically after boot
If you want to change Elasticsearch's default port, you must edit the elasticsearch.yml file. With the manual install it is at config/elasticsearch.yml; with the service install on CentOS it is at /etc/elasticsearch/elasticsearch.yml.
Uncomment this line:
#http.port: 9200
and change the port, for example:
http.port: 9900
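The same edit can also be made non-interactively with sed. The sample file below just stands in for elasticsearch.yml so the command is safe to try; against a real install you would point sed at /etc/elasticsearch/elasticsearch.yml and restart the service afterwards (GNU sed's -i is assumed here):

```shell
# Stand-in for elasticsearch.yml, so the sed edit can be shown safely
printf '#http.port: 9200\n' > /tmp/elasticsearch-sample.yml
# Uncomment the line and switch the port to 9900
sed -i 's/^#http\.port: 9200/http.port: 9900/' /tmp/elasticsearch-sample.yml
grep '^http.port' /tmp/elasticsearch-sample.yml   # → http.port: 9900
```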
Is it possible to connect to a Linux Docker engine running in a Linux VMWare virtual machine, running on Windows 10? I need VMWare for other tasks... which means I need to disable Hyper-V (which Docker requires). The configuration would be as follows:
Windows 10 Enterprise ... running VMWare Workstation Pro v12
Ubuntu 16.04.2 ... as a guest OS in VMWare
Shared Folders running in VMWare sharing C:\Source
Ubuntu VM mounting C:\Source (as /media/source probably)
Docker Engine running within Ubuntu VM
Docker container with Volume mounted at /media/source
coding locally in C:\Source (using Sublime, Atom, whatever)
changes being picked up by Docker container (via nodemon)
Yes and no.
Can this be done? Yes.
Can this be done with VMWare? I wouldn't with VMWare.
VMware has issues dealing with Shared Folders on Windows 10. VMware switched HGFS drivers, and there were still problems as late as December 2016, which is when I finally gave up.
Now, if you are willing to do this with VirtualBox, then it works flawlessly:
flackey@devvms01:~$ ls
Backup  Source
Here's what I currently do...
Disable Hyper-V in Windows 10;
Install VirtualBox & the VirtualBox Extension Pack;
Create the Ubuntu VM;
Before starting the VM, add the "Shared Folders" paths you need (see above);
Install Ubuntu;
Install virtualbox-guest-dkms;
Add your user to the vboxsf group: sudo adduser $USER vboxsf; and,
Create the mount point(s).
The commands would be:
sudo apt-get install virtualbox-guest-dkms
sudo reboot now
sudo adduser $USER vboxsf
mkdir ~/Source
sudo mount -t vboxsf Source ~/Source
mkdir ~/Backup
sudo mount -t vboxsf Backup ~/Backup
Note: You probably don't need to reboot. I'm just anal like that.
After that, it works exactly as you described above. You will be working in C:\Source directly in Windows 10. The VM and Docker will function as if the files are local to the VM's file system.
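If you'd rather not re-run the mount commands after every reboot, entries along these lines in the VM's /etc/fstab should auto-mount the shares at boot (the uid/gid values and the home directory are assumptions for a first-user account; adjust to yours):

```
Source  /home/flackey/Source  vboxsf  uid=1000,gid=1000  0  0
Backup  /home/flackey/Backup  vboxsf  uid=1000,gid=1000  0  0
```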
I am trying to set up a remote Ubuntu desktop on the Azure free tier. I have followed all the steps mentioned here and in the Azure documentation: I set up the instance with Resource Manager, set up RDP, installed xrdp via SSH, and installed the Ubuntu desktop as well.
Installed -- Ubuntu Server 16.04 LTS
I also installed xfce as mentioned in the Azure documentation.
In spite of installing everything properly, I see a dotted screen when I connect remotely. What am I doing wrong?
Use xfce if you are running an Ubuntu version later than Ubuntu 12.04 LTS.
We can follow these steps to install xrdp:
sudo -i
1. Install the xrdp package from the Ubuntu repository
apt-get install xrdp
2. Install the xfce4 desktop environment
apt-get update
apt-get install xfce4
3. Configure xrdp to use the xfce desktop environment
echo xfce4-session >~/.xsession
4. Restart the xrdp service
service xrdp restart
5. Test your xrdp connection:
We can use mstsc (the Windows Remote Desktop client) to test the xrdp connection.
Note:
If the command apt-get install xfce4 gives you this error message:
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
then add ARRAY <ignore> devices=/dev/sda to /etc/mdadm/mdadm.conf, like this:
root@ubuntu:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
ARRAY <ignore> devices=/dev/sda
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
For more information about xrdp on an Azure VM, please refer to this link.
I'm developing on OSX using Docker Machine. I used the quickstart terminal to let it create the default VM which is extremely minimal:
In an OS X installation, the docker daemon is running inside a Linux VM called default. The default is a lightweight Linux VM made specifically to run the Docker daemon on Mac OS X. The VM runs completely from RAM, is a small ~24MB download, and boots in approximately 5s.
I want to install dnsmasq, but none of these instructions worked. I expect to run into this kind of problem again, so beyond installing dnsmasq I want some tool such as apt-get so that I can easily install things. With so few commands available, I don't know how to get started. I have curl, wget, sh, git, and other very basic commands, but I don't have any of the following:
apt
apt-get
deb
pkg
pkg_add
yum
make
gcc
g++
python
bash
What can I do? Should I just download a more complete VM such as Ubuntu? My laptop is not very fast so a very lightweight VM was very appealing to me, but this is starting to seem like a bit much.
The docker-machine VM is based on Tiny Core Linux. To install extra packages, use tce or tce-load (for example, tce-load -wi <package>), Tiny Core's counterpart to apt-get.
A word of warning: you shouldn't treat the docker-machine VM as a regular VM where you install tons of packages and customize things. It's only meant to run containers, and it's best to keep it that way.
I have a development server running Ubuntu, and I only have a normal (non-root) account on it. I want to share a folder with Windows so that the code is stored and compiled on the server but edited on Windows. How can I achieve this without root permission?
P.S. It seems that Samba is installed on that server.
In case you have an SSH server running on the Ubuntu machine, you can try installing e.g. MobaXterm on Windows and accessing Ubuntu via SSH. If that works, you'll be able to use scp to transfer data efficiently!
Another option is to use rsync in combination with ssh, which can be used from Linux without root permission. However, you may have to adjust Windows permissions then.
The best solution, however, is to use a version control system, as @Filburt mentioned in a comment above.
sudo apt-get install samba libpam-smbpass
sudo service smbd restart
sudo gedit /etc/samba/smb.conf and change the workgroup name -> workgroup = WORKGROUP
sudo service smbd restart
sudo apt-get install winbind
sudo gedit /etc/nsswitch.conf and add: hosts: files mdns4_minimal [NOTFOUND=return] wins dns mdns4
sudo /etc/init.d/networking restart
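If the goal is to export your code folder itself, a share definition appended to smb.conf might look like the following; the share name, path, and user are placeholders for your own setup, not values from the steps above:

```ini
[code]
   path = /home/youruser/code
   read only = no
   browsable = yes
   valid users = youruser
```

After adding the share, restart the smbd service so it takes effect.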