I can't figure out how to set up GitLab Pages on my self-hosted GitLab instance without wildcard domains. For example, I have one server with three public IP addresses and domain names:
10.8.0.10 (git.example.com) - main GitLab instance
10.8.0.11 (registry.example.com) - container registry
10.8.0.12 (pages.example.com) - GitLab Pages
Then I set up the Omnibus config /etc/gitlab/gitlab.rb like this:
external_url 'https://git.example.com'
nginx['enable'] = true
nginx['listen_addresses'] = ['10.8.0.10']
registry_external_url 'https://registry.example.com'
registry_nginx['enable'] = true
registry_nginx['listen_addresses'] = ['10.8.0.11']
registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/git.example.com.crt"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/git.example.com.key"
pages_external_url 'https://pages.example.com'
pages_nginx['enable'] = false
gitlab_pages['enable'] = true
gitlab_pages['cert'] = "/etc/gitlab/ssl/pages.example.com.crt"
gitlab_pages['cert_key'] = "/etc/gitlab/ssl/pages.example.com.key"
gitlab_pages['external_http'] = ['10.8.0.12:80']
gitlab_pages['external_https'] = ['10.8.0.12:443']
For example, I have a project located at https://git.example.com/somegroupname/project. I can access this project's container registry at https://registry.example.com/somegroupname/project and pull the Docker image with docker pull registry.example.com/somegroupname/project.
I know that GitLab Pages serves each namespace as a subdomain (one A record per namespace). In my case that comes out as https://somegroupname.pages.example.com/project, but I am not able to use such domain names. Instead, I want to place the namespace in the path, like this:
https://pages.example.com/somegroupname/project
but I can't find any setting or parameter to enable this behavior, and it doesn't work with the current setup. All the pages are stored correctly in the default path /var/opt/gitlab/gitlab-rails/shared/pages/. Can somebody please help me?
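For what it's worth, newer GitLab releases add an experimental Pages option that serves the namespace in the URL path instead of as a subdomain, which is exactly this layout. A minimal sketch, assuming GitLab 16.7 or later where gitlab_pages['namespace_in_path'] is available (check the Pages administration docs for your version before relying on it):
# /etc/gitlab/gitlab.rb -- experimental as of GitLab 16.7
pages_external_url 'https://pages.example.com'
gitlab_pages['namespace_in_path'] = true   # serve pages.example.com/<namespace>/<project>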
Related
I am currently setting up GitLab Pages for our internal network. I have completed my project and the CI pipeline is working. I have gone through all the steps of the gitlab.rb configuration in the GitLab docs, but I still can't get GitLab Pages to work.
My gitlab.rb config:
gitlab_pages['enable'] = true
gitlab_pages['pages_external_url'] = "pages.domain.xyz"
gitlab_pages['external_http'] = ['192.168.x.x:80']
gitlab_pages['external_https'] = ['192.168.x.x:443']
gitlab_pages['cert'] = "/etc/gitlab/ssl/pages.domain.xyz.crt"
gitlab_pages['cert_key'] = "/etc/gitlab/ssl/pages.domain.xyz.key"
gitlab_pages['status_uri'] = "/@status"
gitlab_pages['max_connections'] = 0
gitlab_pages['log_format'] = "json"
gitlab_pages['log_verbose'] = true
gitlab_pages['redirect_http'] = true
gitlab_pages['dir'] = "/var/opt/gitlab/gitlab-pages"
gitlab_pages['log_directory'] = "/var/log/gitlab/gitlab-pages"
gitlab_pages['gitlab_server'] = 'https://gitlab.domain.xyz' # Defaults to external_url
My DNS is as follows:
A record for gitlab instance
A records for pages.domain.xyz
Wildcard for *.pages.domain.xyz
When I go to the Pages page in my project, the page URL is https://user.pages.domain.xyz/project, and I believe this is not how it should work.
I hope someone can help me tackle this problem!
Maybe GitLab 15.4 (September 2022) will help:
Getting started with GitLab Pages just got easier
We’ve made it much easier to get started with GitLab Pages. Instead of creating configuration files by hand, build them interactively using the GitLab UI. Just answer a few basic questions on how your app is built, and we’ll build the .gitlab-ci.yml file to get you started.
This is the first time we’re using our new Pipeline Wizard, a tool that makes it easy to create .gitlab-ci.yml files by building them in the GitLab UI. You can look forward to more simplified onboarding helpers like this one.
See Documentation and Issue.
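For reference, what the wizard produces is an ordinary Pages job. A minimal hand-written equivalent for a plain static site looks roughly like this (a sketch of the conventional layout, not necessarily the wizard's exact output):
# .gitlab-ci.yml -- the job must be named "pages" and publish a "public" artifact
pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r * .public        # "*" does not match the dotfile dir, so no recursion
    - mv .public public
  artifacts:
    paths:
      - public
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH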
I have a GitLab project with the native CI set up, using a shared runner that I unfortunately do not have root access to. I am nonetheless the developer for the CI, so I am making do by messaging my sysadmin to update the gitlab-runner whenever I need changes/resets/etc.
I've run into trouble setting up the config.toml file in /etc/gitlab-runner/ to ensure the output_limit variable is high enough for all of my CI log output. I am following the documentation here, but it seems to be missing a bit of information as far as the requirements for this file go.
How do I actually specify which runner I want to link to in the [[runners]] section of the config file? The name seems to be arbitrary, and the URL seems to have /ci added to the end in every example I see on the internet -- do I need to add that, even if my GitLab URL doesn't include it? I am also not sure which token to use. Currently I am using the token shown beside my shared runner (the one labeled as active) in the Settings > CI/CD > Runners dropdown in my project settings.
Here is the content (some redacted) of my config.toml:
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "arbitrary-name"
url = "http://IP.IP.IP.IP"
token = "<token-mentioned-above>"
output_limit = 16000
But the size of my CI output is still stuck at 4092 after a gitlab-runner restart. Do I need to include more than this? What am I missing?
Works for me, here is my toml file:
[[runners]]
name = "serv1"
url = "http://ip"
token = "token"
output_limit = 50000000
Restart the runner service:
systemctl restart gitlab-runner
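Note that output_limit is measured in kilobytes (the default is 4096, i.e. 4 MB), so a value like 16000 means roughly 16 MB of log. If the new limit still doesn't take effect, it's worth confirming the runner process is actually reading that config.toml; something like:
sudo gitlab-runner list     # prints the runners loaded from /etc/gitlab-runner/config.toml
sudo gitlab-runner verify   # checks each registered runner's token against the GitLab server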
I have middle-tier files written in Python that use MongoDB, but in all of these files the production URL is hard-coded when the MongoDB client is created. Now I want to make one config file that holds the URLs for both production and development, and in the files I just want to read the URL from the config file depending on whether the environment is development or production. I am new to Python.
Can anyone please tell me how to achieve this?
I usually create a settings directory with a local_settings.py file.
You can swap the vars out based on environment.
I've gone further and automated environment detection, but it's specific to my infrastructure.
Then inside of the file:
DEVELOPMENT = True
STAGING = False
PRODUCTION = False
MONGO_DB_URL_PROD = 'prod.domain.com'
MONGO_DB_URL_STAGING = 'staging.domain.com'
if PRODUCTION:
MONGO_DB_URL = MONGO_DB_URL_PROD
elif STAGING:
MONGO_DB_URL = MONGO_DB_URL_STAGING
else:
MONGO_DB_URL = 'localhost'
then in my code
from settings.local_settings import MONGO_DB_URL
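A variant of the same idea that avoids editing the flags by hand is to key off an environment variable; a minimal sketch, assuming a hypothetical APP_ENV variable and the same placeholder URLs:
import os

# Map environment names to MongoDB URLs; APP_ENV is an assumed variable
# name, and we fall back to development when it is unset.
MONGO_DB_URLS = {
    'production': 'prod.domain.com',
    'staging': 'staging.domain.com',
    'development': 'localhost',
}

MONGO_DB_URL = MONGO_DB_URLS[os.environ.get('APP_ENV', 'development')]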
I have a basic ActiveStorage setup with one model that has_many_attached :file_attachments. In a service elsewhere, I'm trying to generate a link to be used outside the main app (email, background job, etc.).
With S3 in production I can do:
item.file_attachments.first.service_url and I get an appropriate link to the S3 bucket+object.
I cannot use the method prescribed in the rails guides: Rails.application.routes.url_helpers.rails_blob_path(item.file_attachments.first)
It errors with:
ArgumentError: Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true
I can pass it a host: 'http://....' argument and it's happy although it still doesn't generate the full URL, just the path.
In development I'm using disk-backed file storage, and I can't use either method:
> Rails.application.routes.url_helpers.rails_blob_path(item.file_attachments.first)
ArgumentError: Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true
Setting host here also doesn't generate a full URL.
In production service_url works; in development, however, I get the error:
> item.file_attachments.first.service_url
ArgumentError: Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true
and specifying a host doesn't help:
item.file_attachments.first.service_url(host:'http://localhost.com')
ArgumentError: unknown keyword: host
I've also tried adding
config.action_mailer.default_url_options = { :host => "localhost:3000" }
config.action_storage.default_url_options = { :host => "localhost:3000" }
Rails.application.routes.default_url_options[:host] = 'localhost:3000'
with no success.
My question is: how can I get the full URL in a manner that works in both development and production? And where do I set the host?
Active Storage’s disk service expects to find a host for URL generation in ActiveStorage::Current.host.
When you call ActiveStorage::Blob#service_url manually, ensure ActiveStorage::Current.host is set. If you call it from a controller, you can subclass ActiveStorage::BaseController. If that’s not an option, set ActiveStorage::Current.host in a before_action hook:
class Items::FilesController < ApplicationController
before_action do
ActiveStorage::Current.host = request.base_url
end
end
Outside of a controller, use ActiveStorage::Current.set to provide a host:
ActiveStorage::Current.set(host: "https://www.example.com") do
item.file_attachments.first.service_url
end
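If you are on a newer Rails, note that Blob#service_url was renamed to Blob#url in Rails 6.1, and ActiveStorage::Current.host was replaced by ActiveStorage::Current.url_options in Rails 7. The equivalent there would be roughly (hostname assumed):
ActiveStorage::Current.set(url_options: { host: "www.example.com", protocol: "https" }) do
  item.file_attachments.first.url
end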
I needed the URL for an image stored in ActiveStorage.
The image was
Post.first.image
Wrong
Post.first.image.url
Post.first.image.service_url
Right
url_for(Post.first.image)
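url_for is only available where view helpers are in scope. Outside a view or controller, the named route helper with an explicit host is a common equivalent (localhost:3000 is an assumed development host here):
Rails.application.routes.url_helpers.rails_blob_url(Post.first.image, host: "localhost:3000")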
I had a similar problem. I had to force the port to be different due to a container setup. In my case the redirect done by ActiveStorage included the wrong port. For me adjusting the default_url_options worked:
before_action :configure_active_storage_for_docker, if: -> { Rails.env.development? }
def configure_active_storage_for_docker
Rails.application.routes.default_url_options[:port] = 4000
end
I have connected Visual Studio Online to my Azure website. This is not a .NET ASP.NET MVC project, just several static HTML files.
Now I want to get my files uploaded to Azure and available 'online' after my commits/pushes to the TFS.
When a build definition (based on GitContinuousDeploymentTemplate.12.xaml) is executed it fails with an obvious message:
Exception Message: The process parameter ProjectsToBuild is required but no value was set.
My question: how do I setup a build definition so that it automatically copies my static files to Azure on commits?
Or do I need to use different tooling for this task (like WebMatrix)?
update
I ended up creating an empty website and deploying it manually from Visual Studio using Web Deploy. Another possible option to consider is creating a local Git repository at Azure.
Alright, let me try to give you an answer:
I was having quite a similar issue. I had a static HTML, JS, and CSS site which I needed to have in TFS due to the project, and I wanted to make my life easier using continuous deployment. So what I did was the following:
When you have a Git repository in TFS, you get a URL for the repository - something like:
https://yoursite.visualstudio.com/COLLECTION/PROJECT/_git/REPOSITORY
However, in order to access the repository itself you need to authenticate, and that is not currently possible if you try to put the URL with authentication into Azure:
https://username:password@TFS_URL
It will not accept it. So, in order to bind the deployment, you just put the repository URL there (the deployment will fail, but it will prepare the environment for us to proceed).
However, once you link it there, you can get the DEPLOYMENT TRIGGER URL on the Configure tab of the Website. What it is for: when you push a change to your repository (say, to GitHub), GitHub makes an HTTP POST request to that link, which tells Azure to deploy the new code onto the site.
Now I went to Kudu, which is the underlying system of Azure Websites that handles deployments. I figured that if you send the correct contents in the HTTP POST (JSON format) to the DEPLOYMENT TRIGGER URL, you can have it deploy code from any repository, and it even authenticates!
So the only thing left to do is to generate the alternative authentication credentials on the TFS site and put the whole request together. I wrapped this entire process into the following PowerShell script:
# Windows Azure Website Configuration
#
# WAWS_username: The user account which has access to the website, can be obtained from https://manage.windowsazure.com portal on the Configure tab under DEPLOYMENT TRIGGER URL
# WAWS_password: The password for the account specified above
# WAWS: The Azure site name
$WAWS_username = ''
$WAWS_password = ''
$WAWS = ''
# Visual Studio Online Repository Configuration
#
# VSO_username: The user account used for basic authentication in VSO (has to be manually enabled)
# VSO_password: The password for the account specified above
# VSO_URL: The URL to the Git repository (the branch is specified on the https://manage.windowsazure.com Configure tab under BRANCH TO DEPLOY)
$VSO_username = ''
$VSO_password = ''
$VSO_URL = ''
# DO NOT EDIT ANY OF THE CODE BELOW
$WAWS_URL = 'https://' + $WAWS + '.scm.azurewebsites.net/deploy'
$BODY = '
{
"format": "basic",
"url": "https://' + $VSO_username + ':' + $VSO_password + '#' + $VSO_URL + '"
}'
$authorization = "Basic "+[System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($WAWS_username+":"+$WAWS_password ))
$bytes = [System.Text.Encoding]::ASCII.GetBytes($BODY)
$webRequest = [System.Net.WebRequest]::Create($WAWS_URL)
$webRequest.Method = "POST"
$webRequest.Headers.Add("Authorization", $authorization)
$webRequest.ContentLength = $bytes.Length
$webRequestStream = $webRequest.GetRequestStream();
$webRequestStream.Write($bytes, 0, $bytes.Length);
$webRequest.GetResponse()
I hope that what I wrote here makes sense. The last thing you would need is to bind this script to a hook in Git, so that when you perform a push, the script gets triggered automatically afterwards and the site is deployed. I haven't figured this piece out yet, though.
This should also work to deploy a PHP/Node.js and similar code.
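On that last hook piece: Git has no client-side post-push hook, so the closest built-in trigger is a pre-push hook, which fires just before the push is sent. A sketch, assuming the script above is saved as deploy-to-azure.ps1 (a hypothetical name) in the repository root:
#!/bin/sh
# .git/hooks/pre-push (must be executable) -- runs right before the push;
# Git offers no client-side post-push hook, so this fires slightly early.
powershell.exe -ExecutionPolicy Bypass -File ./deploy-to-azure.ps1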
The easiest way would be to add them to an empty ASP.NET project, set them to be copied to the output folder, and then "build" the project.
Failing that, you could modify the build process template, but that's a "last resort" option.