I've successfully installed GitLab on a cloud server, but every time I create a new project its URL defaults to localhost, so when I try to push to it, git attempts to push to the local machine instead of the remote one.
Is there any way to set an IP address for the destination push?
Any help appreciated...
There's a part in your config/gitlab.yml that says something along the lines of 'replace "localhost" with...'. Set that to your server's address, and then restart your app.
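For a source install, the relevant part of config/gitlab.yml looks roughly like this (a sketch; gitlab.example.com is a placeholder for your server's public hostname or IP):

# config/gitlab.yml
production:
  gitlab:
    host: gitlab.example.com   # was "localhost"
    port: 80
    https: false

Projects created after the restart should then show clone URLs pointing at that host instead of localhost.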
If that didn't help, post your config.
I'm new to Dokku and the nginx server, and I'm trying to set it up locally on Ubuntu 20.04 for testing purposes instead of using a provider such as Hetzner (Cloud) or DigitalOcean (whose ready-made Dokku image is an older version of Dokku than the one I need), which would also cause unwanted financial expenses.
More precisely, I'm trying to set up a domain address in my /etc/hosts file, e.g.:
127.0.0.1 localhost
188.x.xx.xxx domain.com
I followed this Dokku setup tutorial and ran one of the following commands:
$ dokku domains:set-global localhost
$ dokku domains:set-global domain.com
But once I execute the following command (to show the created djangotutorial app):
$ dokku config:show djangotutorial
... I get this as output:
=====> djangotutorial env vars
NO_VHOST: 1
From there, everything in the tutorial runs normally until I execute the following command (to push to the remote server):
$ git push dokku main
... I get the following message:
ssh: connect to host domain.com port 22: Connection timed out
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository
exists.
Is there a way to achieve this with the setup described above, and if so, how?
Any advice/help is greatly appreciated.
Thank you in advance.
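One thing worth checking for a purely local test (a sketch, assuming the app is named djangotutorial as above): the dokku git remote may still point at domain.com, which your machine cannot reach over SSH, hence the timeout on port 22.

$ git remote -v                                             # see where the "dokku" remote points
$ git remote set-url dokku dokku@localhost:djangotutorial   # point it at the local box instead of domain.com
$ git push dokku main

This assumes sshd is running locally and your public key has been added for the dokku user (e.g. with dokku ssh-keys:add).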
OS: Ubuntu 16.04
Hypervisor: VirtualBox
Network configuration: NAT network with port forwarding to access the VMs through the host IP. I can also ping a VM from another VM.
I'm trying to connect my Jenkins app hosted on a VM to my Bitbucket server, also on a VM. I followed a tutorial on the internet, but when I enter the address of my git repository I'm getting this:
Failed to connect to repository : Command "usr/bin/git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD" returned status code 128:
stdout:
stderr: fatal: unable to access 'http://admin@192.168.6.102:8005/scm/tes/repository-test.git/': The requested URL returned error: 403
So, to be sure, I tried to execute the command in the terminal... and in the terminal it seems to work. I can also push, clone, pull, etc.
(Screenshot omitted: it showed the same command succeeding from the terminal.)
Do you have an explanation?
EDIT:
I tried some other things, like running the command with and without sudo, to see whether it was a permissions problem, and it seems that it's not.
But I noticed that there is no output when I use the "HEAD" argument.
Do you think that because "HEAD" gives no result, git in Jenkins interprets it as no answer and returns the error 403?
EDIT 2:
I found this on the web: http://jenkins-ci.361315.n4.nabble.com/Jenkins-GIT-ls-remote-error-td4646903.html
That person has the same problem, but in a different way; I will try allocating more RAM to see if it does the trick.
There could be many possible problems, but you are getting 403 (Access Forbidden), which indicates a permissions problem. I would suggest checking the common mistakes first:
a) try https instead of http - my SCM only uses https,
b) check whether admin is the correct user - the SCM uses scmadmin by default.
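For example, you could re-run the probe Jenkins issues with both changes applied (a sketch; the https port and the scmadmin account are assumptions, so adjust them to your Bitbucket setup):

$ git ls-remote -h https://scmadmin@192.168.6.102:8005/scm/tes/repository-test.git HEAD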
Here I run the exact same command twice.
The first time I used the proxy configuration which I need to access the internet, and the second time I set the proxy server to "none".
So there is a problem with the proxy.
I thought the proxy was not used for a NAT connection in VirtualBox...
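A quick way to confirm that is to run the same probe once through the proxy and once with the proxy variables unset for that single command (a sketch, assuming the proxy is configured via the usual environment variables):

$ git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD   # uses the configured proxy
$ env -u http_proxy -u https_proxy -u HTTP_PROXY -u HTTPS_PROXY \
    git ls-remote -h http://admin@192.168.6.102:8005/scm/tes/repository-test.git HEAD   # bypasses it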
I found the solution.
I had to reinstall Jenkins to get a user named "jenkins" with its own home directory.
I don't know whether it's related or not, but I also configured my Bitbucket server to use only HTTPS with a self-signed certificate (I work on a LAN).
My problem turned out to be linked to my proxy settings.
I had disabled all my proxy settings in Linux, which is why I was able to run from the terminal the command that didn't work in Jenkins.
When I logged in with sudo su jenkins, the commands also worked.
Then I found out that in the home directory of the jenkins user there was a "proxy.xml" file. I opened it and saw my old proxy settings.
I deleted all its content with vim, saved, restarted Jenkins, and the error was gone.
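In short, the fix was removing the stale proxy settings from the jenkins user's home directory and restarting (a sketch, assuming that home directory is JENKINS_HOME, e.g. /var/lib/jenkins, and a systemd-managed Jenkins):

$ sudo su jenkins
$ ls ~/proxy.xml                    # Jenkins stores its proxy configuration here
$ rm ~/proxy.xml                    # or edit it and delete the old proxy entries
$ exit
$ sudo systemctl restart jenkins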
There can be a git version mismatch.
I would suggest you update git; maybe it will resolve your issue.
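A quick way to compare the two versions (a sketch, assuming Jenkins runs as a local jenkins system user):

$ git --version                    # the version your terminal uses
$ sudo -u jenkins git --version    # the version the jenkins user, and therefore Jenkins, picks up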
I have recently installed GitLab on an internal server (192.168.0.XX). After installation I edited the gitlab.rb file (external_ip: 192.168.0.XX) and ran the reconfigure command. However, when I go to that address I am not served the GitLab page. Am I doing something silly?
Don't be an idiot and have Apache running at the same time...
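For reference, a minimal sketch of the omnibus configuration (the setting in /etc/gitlab/gitlab.rb is external_url rather than external_ip, and 192.168.0.XX stands for your server's address); also make sure nothing else, such as Apache, is already bound to port 80:

# /etc/gitlab/gitlab.rb
external_url 'http://192.168.0.XX'

$ sudo systemctl stop apache2      # if Apache is holding port 80, stop or disable it
$ sudo gitlab-ctl reconfigure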
My current setup is an Ubuntu VM with GitLab installed (gitlabVM). GitLab is running on nginx on port 8888.
My router is configured as the following:
External 4322 -> gitlabVM:22
External 8888 -> gitlabVM:8888
If I am at a remote location, how do I connect back to do a git clone?
I've tried git+ssh://myuseringitlab@routerip:4322/root/repo.git but I get an error that there are no repositories.
The URL shown in GitLab is git@localhost:root/repo.git.
Did you try this?
git+ssh://git@routerip:4322/root/repo.git
You have to provide the git user. Your GitLab user credentials are asked for at the next step (if you don't use SSH auth).
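If typing the port every time gets tedious, an ~/.ssh/config entry can hide it (a sketch; gitlab-home is just a made-up alias and routerip stands for your router's public address):

# ~/.ssh/config
Host gitlab-home
    HostName routerip
    Port 4322
    User git

$ git clone gitlab-home:root/repo.git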
I have a production build of my site on a VPS, and I deploy to a bare git repo which has a hook that checks out the commits to an app directory. I use forever to keep my app running from the app directory.
What I want to do is set up a development build which I can push to. The development build could be hosted under a subdomain on my VPS. However, I'll need an authentication step that prevents anyone and everyone from accessing the development site. How could I put authentication in front of the entire site with few (if any) changes to my application?
Why don't you just run it on a port that isn't available to the public? Then you could create an SSH tunnel and access it via localhost.
Add a dev ssh user to your VPS and assign it a password.
Your ssh tunnel would look like this (just adjust your ports accordingly):
ssh -N -L8808:localhost:8808 user@destination.com
You'll be prompted for your password and then you would leave your terminal session open and go to your dev server via "http://localhost:8808"
Another option (something I typically do) is to have a file checked into your repo named "config.sample.json" with configuration information (in this case, your development username/password restriction). Then you also set up git to ignore "config.json" (so you don't accidentally commit it to your repository and have to edit files on your production deployments).
Next you would write a function that requires that config.json file and uses its configuration data if the file is found; otherwise it would load up as "production".
Then you would deploy your code to your development directory and afterward rename "config.sample.json" to "config.json" and make any edits needed in that file to set up debugging, access control, etc.
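The repository-side part of that workflow looks roughly like this (a sketch; config.sample.json and config.json are the file names from the description above):

$ echo "config.json" >> .gitignore       # keep the real config out of version control
$ git add .gitignore config.sample.json
$ git commit -m "track sample config, ignore real config"

# on the development deployment, after checkout:
$ cp config.sample.json config.json      # then edit config.json to enable auth, debugging, etc.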