Problem connecting to Azure GitLab VM instance after changing external URL - azure

I am trying to create a self-managed version of GitLab that runs on Azure, using this guide: https://docs.gitlab.com/ee/install/azure/
It all works fine until I get to the "Change the GitLab external URL" section. I follow the instructions exactly: I replace the external URL, comment out the indicated lines, and run the reconfigure command. But this breaks connections to the VM; I can no longer connect to it at all (previously I could connect, but I would always be redirected to the public, unsecured URL, as the article says).
Now I simply get a "This site can't be reached" error: [public ip] refused to connect.
Any ideas what step I'm missing?
I also think the article is slightly outdated because of the section that tells us to rename the utility Bitnami uses:
"sudo mv /opt/bitnami/apps/gitlab/bnconfig /opt/bitnami/apps/gitlab/bnconfig.bak"
There is no longer a bnconfig file in the GitLab Azure instance.
I would greatly appreciate any help!

I just had the exact same problem. What worked for me was to edit the config by:
sudo vim /etc/gitlab/gitlab.rb
And then enable letsencrypt by changing this line to true:
letsencrypt['enable'] = true
Save, and then do the usual
sudo gitlab-ctl reconfigure
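For reference, after that edit the relevant lines in /etc/gitlab/gitlab.rb looked roughly like this (a sketch; the domain is a placeholder for your own DNS name):
external_url 'https://gitlab.example.com'
letsencrypt['enable'] = true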
According to the docs it is supposed to be enabled automatically:
Using https in the URL automatically enables Let's Encrypt, and sets HTTPS by default
...but that statement no longer holds, it seems.
Also, I never figured out the missing /opt/bitnami/apps/gitlab/bnconfig file that is supposed to be renamed (it is missing for me too), but I don't seem to lose the config after restarting the VM, so that part of the docs just seems outdated.

Related

Gitlab redirecting loop

Yesterday I installed GitLab on a VM of mine and configured everything to work with it.
GitLab listens on port 8081 on my domain (e.g. domain:8081).
I have an Apache instance listening on ports 80 and 443, so I set up a forward there (e.g. domain/git).
Everything worked fine (except the CSS theme of domain/git, but that's no problem), but then I changed the root URL (I think; I don't know what this setting is called) in the admin section directly in GitLab to http://domain/git, so that GitLab would show me that URL directly when I copy a URL to clone.
Now I can't access my GitLab instance, because I get a redirect loop.
I also can't find where GitLab stored this setting; I guess it's kept in the database and not in any file.
Can someone help me figure out how to change this particular configuration back to default?
Thanks in advance!
You likely changed the 'home page URL' used for redirecting logged-out users. Instead of hitting the domain main page, hit /users/sign_in and you should be able to sign back in as your admin user. Go to the admin section and clear out the setting.
You instead need to go into your config/gitlab.yml (source install) or /etc/gitlab/gitlab.rb (package install) and set the external_url to the address you want.
Then restart/reconfigure the app so the new URL is used in the git clone instructions.
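For an Omnibus (package) install, a minimal sketch of that change, assuming the address you want is http://domain/git:
external_url 'http://domain/git'
Then run:
sudo gitlab-ctl reconfigure
(Whether the path portion is honoured as a relative URL root depends on the GitLab version; for a source install the equivalent settings live in config/gitlab.yml.)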

IIS bindings keep being removed

I'm having a problem where the security certificate for a site is periodically being unbound from port 443 and replaced with another certificate that is sitting on the server. So whenever a user tries to access the site they are met with an 'untrusted' warning.
When this first happened, I investigated, found the wrong certificate in place, and changed it back. This worked fine for a while, but then it happened again. I checked the event logs and the following two warnings are fired:
SSL Certificate Settings deleted for endpoint : 0.0.0.0:443
SSL Certificate Settings created by an admin process for endpoint : 0.0.0.0:443
This happens once or twice a day, and I have to keep rebinding the correct certificate; I haven't been able to find a solution yet.
The site is running on Windows Server 2012/ IIS 8
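For reference, the binding can be inspected and manually restored from an elevated command prompt with netsh (a sketch; the certhash thumbprint and appid GUID are placeholders for your own certificate and application):
netsh http show sslcert ipport=0.0.0.0:443
netsh http delete sslcert ipport=0.0.0.0:443
netsh http add sslcert ipport=0.0.0.0:443 certhash=<your-cert-thumbprint> appid={00000000-0000-0000-0000-000000000000}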
According to a couple of online support forums/articles, there is an old legacy setting in the ApplicationHost.config file which is supposed to cause this. All the references I found pointed to a property in the 'customMetaData' section with a specific id (5506). I couldn't find that property anywhere in our ApplicationHost.config file on the server.
Has anyone encountered a similar issue, or can anyone shed any light on potential causes of this? Having looked around online I'm finding it hard to find much related to my problem, but perhaps I'm not searching for the right thing...
Any advice on this issue would be greatly appreciated.
NOTE:
Have since realised that this happens at 13:00 each day; I can't see any significant events occurring on the server that might trigger it, though...
Resolution
Locate the following property in the customMetadata section of the applicationHost.config file, and delete it:
<property id="5506" dataType="Binary" userType="1" attributes="None" value="oXiHOzFAMOF0YxIuI7soWvDFEzg=" />
This property is a legacy feature from Internet Information Services (IIS) 6.0 and is no longer needed.
Link to MS Article
If the other answer (property id) doesn't work, follow these steps:
Check if there is antivirus software on the server, especially one with a HIPS feature. Disable the antivirus and try to reproduce the issue.
Check if the site is using a wildcard certificate. This issue occurs when the wildcard certificate has been imported without marking the keys as exportable. To solve it, uninstall the affected certificate and import it again with the keys marked as exportable.
Look for the System Center Virtual Machine Manager Agent on the server. If it is enabled, disable it and try to reproduce the issue (reference).
Another process might be using port 443 on the server (example: Windows Admin Center; see this post: 503 Service Unavailable error related to Windows Admin Center).
Check if insecure protocols are enabled. The registry settings are below. Disable these protocols if they are enabled and try to reproduce the issue:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Client
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 3.0\Server
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.0\Client
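A sketch of disabling one of these protocols with reg.exe, assuming the standard SCHANNEL key layout (run from an elevated prompt; a reboot is usually required for the change to take effect):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\SSL 2.0\Server" /v DisabledByDefault /t REG_DWORD /d 1 /f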
Source: SSL Certificate Settings deleted for endpoint (Event ID 15300)

OpenCart 2.0.0 - SyntaxError: JSON.parse: unexpected end of data at line 1 column 1 of the JSON data OK

In OpenCart v2.0.0, in /admin, when I receive an order I should be able to change the status of the order to let the customer know what is going on with her purchase. This functionality is in the 'History' tab.
Unfortunately I can't change or modify any orders.
When I change the status and try to save it, OpenCart gives me an error:
SyntaxError: JSON.parse: unexpected end of data at line 1 column 1 of the JSON data OK
It happens even on a clean install. Everything is set up correctly: the domain name is OK, the shop is not in maintenance mode, and there is no password in the .htaccess file (BTW, even removing it doesn't help).
It looks like the attached screenshot.
One solution is to install the newest version of OpenCart, which at the moment I'm writing this post is 2.0.1.1. That's probably the best idea.
My problem is that I have made quite a lot of modifications, and upgrading would be very difficult for me. When making my changes I didn't use vqmod, which I probably should have (but nobody is perfect, is she?).
Or maybe you don't want to upgrade for a different reason?
Now, how do you resolve this problem without actually changing the software itself?
I got the same error and I fixed it:
Go to Admin -> System -> Users -> API
Add a new API user, generate a password for it, and enable it
Then go to Admin -> System -> Settings -> Edit -> Option (tab)
Select the API user under the Checkout section as your API user
Then save the changes
Then go to Sales -> Orders -> Edit
I guess the error comes when a cURL request in your OpenCart installation tries to get data from the OpenCart API and gets blocked.
-- Quick Testing
Go to the info() function in /admin/controller/sale/order.php,
find the cURL initialization inside info(), and look for $json = curl_exec($curl);
add after that:
if ($json === false) {
    // Print the cURL error so the failing request is visible
    var_dump(curl_error($curl));
    exit;
}
If there is an error, you should see it when you open an order's info page from your admin panel.
You may get a refused-connection message on port 80 or 443.
If yes, possible errors and solutions may be:
You are working in a protected directory.
This is a website prompt (like a JS alert with fields) that asks for a user/pass.
You may turn it off and try again to add a history record.
In Plesk you can find it as "Password Protected Directory" in the domain options.
If you recently modified your .htaccess, you should check it for auth statements causing the above symptoms.
PHP: check if you have added any $_SERVER['REMOTE_ADDR'] test in an if block to reject IPs on purpose.
When using Plesk, if you have root access (SSH) and your website domain is the same as the Plesk admin panel hostname
(assume that your domain is www.yourdomain.com),
you can check from SSH by executing curl -vv "http://www.yourdomain.com" (or https) on the server that hosts your domain itself (localhost).
If you get a refused connection on port 80 (or 443 for https) but a successful response from curl -vv "http://www.yourdomain.com:7080" or curl -vv "http://www.yourdomain.com:7081" (the virtual host ports), then you can fix it as below. It is also telling that you get a successful connection on 80 and 443 when trying from another server!
In that case, you have to modify your server's hosts file (/etc/hosts) and add your full website domain right after your public IP (usually the last line).
So, on the public-IP line xxx.xxx.xxx.xxx, append www.yourdomain.com. You may have other hostnames defined on your public IP; please do not remove any, just add www.yourdomain.com at the end of the public-IP line. Then check again with curl, or by adding an order history entry.
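As a sketch, the resulting /etc/hosts line might look like this (the IP and the first hostname are placeholders; keep whatever entries are already on the line):
203.0.113.25   server-hostname   www.yourdomain.com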
Hope it helps,
giannisepp
After some hard time I managed to solve this problem and I want to share this solution with you:
In the admin panel of your shop go to:
Admin > Settings > Users > API
There should be one user with a name like
XrpeYEWrFHOcqB1phjBXdUCRO1A3sCvDpgmTGBcJ7G6WuYIMKXCrIJUpzvFPfimWT6LHQLisTYz0nuOy7ZK
If there is not, create one and give it a reasonably complicated name like the one in the example.
Then you have to look in the database where your store data is kept (using phpMyAdmin helps a lot).
Find the api table, which contains the API user, and check the api_id of this user.
Find the setting table (in a default OpenCart installation it's named oc_setting) and find the key config_api_id.
Set the value field to the same number as the api_id you found in the api table.
The problem should be solved.
In my OpenCart 2.0.0 installation the value was set to 0 while the api_id was 2.
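If you prefer to do it with a query in phpMyAdmin, a sketch under the same assumptions (default oc_ table prefix, api_id of 2) would be:
UPDATE oc_setting SET value = '2' WHERE `key` = 'config_api_id';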
I had the same issue and tried all workarounds possible and imaginable.
In the end I gave up and called my hosting provider. They found that OpenCart won't run properly on any PHP version above 5.4.33. Ergo: use PHP 5.4.33.
This might not solve all issues, or might work only in conjunction with the other fixes above, but it's worth checking which PHP version your hosting is running by default.
If your server does not support HTTPS, modify https to http.
Open the admin/config.php file and change https to http in these lines:
define('HTTPS_SERVER', 'http://'.$_SERVER['HTTP_HOST'].'/admin/');
define('HTTPS_CATALOG', 'http://'.$_SERVER['HTTP_HOST'].'/');
Disabling maintenance mode did it for me.
In the admin/controller/sale/order.php file, around line 2438, you should see something like this:
if ($store_info) {
    $url = $store_info['ssl'];
} else {
    $url = HTTPS_CATALOG;
}
As you can see, cURL always uses the HTTPS_CATALOG defined constant, which contains your shop's SSL URL. Why? I don't know.
The solution: edit config.php in your shop's root folder and change the HTTPS_CATALOG defined constant to a non-SSL URL; simply delete the "s" at the end of "https".
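In other words, the HTTPS_CATALOG line ends up looking something like this (the domain is a placeholder for your own shop URL):
define('HTTPS_CATALOG', 'http://www.yourdomain.com/');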

CNAME setup displaying 'Bad Request (400)'

Background:
My Django app is located at www.name-of-app.rhcloud.com. Through dns-provider.com I own www.name-of-app.com.
The CNAME setup is as follows:
name-of-app.com redirects to www.name-of-app.com, and www.name-of-app.com is set up as a CNAME alias to www.name-of-app.rhcloud.com.
Now, if I try to access www.name-of-app.com from any browser, I receive a 'Bad Request (400)' error.
I have played around with the following settings:
I can successfully redirect www.name-of-app.com to www.name-of-app.rhcloud.com, but after switching back to a CNAME I am met with the same error.
I have added the necessary alias via: rhc alias add www.name-of-app.com -a myApp
I have tried removing and then re-adding the above alias, to no effect.
If I run the host command from my development machine, I see that the alias is correctly set up.
cmd: host www.name-of-app.com (the first 2 lines of output are listed below):
www.name-of-app.com is an alias for name-of-app.rhcloud.com.
name-of-app.rhcloud.com is an alias for ex-std-nodeXXX.prod.rhcloud.com.
I am working with dns-provider.com, but they haven't raised any issues to this point.
Question:
How can I get this CNAME issue resolved? It seems to be out of my control and beyond my area of expertise at the moment.
Ironically enough, the issue turned out to be a Django-related problem (someone removed the Django mail list). Clearly I didn't provide enough information to know that, however.
The issue lies in the fact that the CNAME domain was not in my ALLOWED_HOSTS setting. Upon adding it to ALLOWED_HOSTS, I was able to access the site as expected.
Cheers.
@Ibn Saeed (I don't have enough reputation to reply with a comment)
I had the same issue and solved it by adding the exact domain name to ALLOWED_HOSTS, leaving it like this:
ALLOWED_HOSTS = [
    '.mydomain.com.',
    'mydomain.com',
]

Jenkins URL with own domain

I have installed the Jenkins continuous integration system on my Windows server successfully and it works without any errors, but I was unable to get the Jenkins URL working with my host domain. The default Jenkins address, http://localhost:8080, works well. My domain/server name is projectdev, so I want to give Jenkins the URL http://projectdev/jenkins so that other developers in my network can access the Jenkins dashboard easily.
Although I added http://projectdev/jenkins as the Jenkins URL in the Jenkins configuration section, it doesn't work: I can't access it from other computers in my network. But when I use http://localhost:8080 I can access the dashboard directly.
I also tried to add Jenkins as a web application in IIS and give it the address I want, but I don't know what to provide as the physical path of Jenkins, as it was installed using the Jenkins.jar file.
It would be really great if someone can help me to get this done as I want.
Thank you.
You have to configure IIS to re-route requests for {domain}/jenkins to {domain}:8080/jenkins.
Check the info at:
http://www.iis.net/learn/web-hosting/microsoft-web-farm-framework-20-for-iis-7/setting-up-a-server-farm-with-the-web-farm-framework-20-for-iis
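As a rough sketch, with Application Request Routing installed and proxying enabled, a URL Rewrite rule in the site's web.config could look something like the following (the port and path are assumptions based on the question, not a definitive setup):
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Forward /jenkins requests to the Jenkins instance listening on port 8080 -->
        <rule name="JenkinsProxy" stopProcessing="true">
          <match url="^jenkins(/.*)?$" />
          <action type="Rewrite" url="http://localhost:8080/jenkins{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>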
Try this address: http://projectdev:8080/jenkins. If you type it without the 8080 port, the port will default to 80.
