GitLab SAML Configuration - 404 on metadata

Question regarding SAML configuration.
I'm currently running GitLab 9.1 CE on CentOS 7, with an Apache instance in front acting as a reverse proxy to GitLab and handling HTTP(S).
My gitlab.rb has the following configured:
external_url 'http://external.apache.server/gitlab/'
gitlab_rails['omniauth_enabled'] = true
gitlab_rails['omniauth_allow_single_sign_on'] = ['saml']
gitlab_rails['omniauth_auto_sign_in_with_provider'] = 'saml'
gitlab_rails['omniauth_block_auto_created_users'] = false
# gitlab_rails['omniauth_auto_link_ldap_user'] = false
gitlab_rails['omniauth_auto_link_saml_user'] = true
# gitlab_rails['omniauth_external_providers'] = ['twitter', 'google_oauth2']
# gitlab_rails['omniauth_providers'] = [
# {
# "name" => "google_oauth2",
# "app_id" => "YOUR APP ID",
# "app_secret" => "YOUR APP SECRET",
# "args" => { "access_type" => "offline", "approval_prompt" => "" }
# }
# ]
In order to set up SAML, my identity provider is asking for the metadata returned from http://external.apache.server/gitlab/users/auth/saml/metadata, which returns a 404.
The SAML documentation mentions that GitLab needs to be configured for SSL; I'm not sure whether that is why the URL above returns a 404.
The problem with enabling SSL is that my external URL already provides it, and if I use https://external.apache.server as-is, then GitLab looks for a key/cert for that domain on the box, which doesn't seem correct. I don't want to change the external URL, as it should be fronted by Apache. I'm a bit confused about what the proper configuration should be.
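For reference, my config above has no 'saml' entry under omniauth_providers; the docs describe a provider block along these lines (all IdP values here are placeholders, not my real settings), and I'm not sure whether the missing block explains the 404:
gitlab_rails['omniauth_providers'] = [
  {
    "name" => "saml",
    "label" => "SAML",
    "args" => {
      # placeholder values - substitute the IdP's real details
      "assertion_consumer_service_url" => "http://external.apache.server/gitlab/users/auth/saml/callback",
      "idp_cert_fingerprint" => "PLACEHOLDER_FINGERPRINT",
      "idp_sso_target_url" => "https://idp.example.com/sso",
      "issuer" => "http://external.apache.server/gitlab",
      "name_identifier_format" => "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent"
    }
  }
]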
Thanks

Related

.NET Core application not reading ASPNETCORE_ENVIRONMENT value?

I deployed an ASP.NET Core 7 application to Linux Web Application in Azure.
When I access the URL I get an Application Error and the Logs shows:
System.IO.FileNotFoundException:
The configuration file 'settings..json' was not found and is not optional.
It seems the environment value is missing; the file name should be:
settings.production.json
In the Azure App Service Configuration I have:
[
  {
    "name": "ASPNETCORE_ENVIRONMENT",
    "value": "production",
    "slotSetting": false
  }
]
And the application Program.cs code is:
Serilog.Log.Logger = new Serilog.LoggerConfiguration()
    .WriteTo.Console(LogEventLevel.Verbose)
    .CreateBootstrapLogger();
try {
    Serilog.Log.Information("Starting up");
    WebApplicationBuilder builder = WebApplication.CreateBuilder(new WebApplicationOptions {
        Args = args,
        WebRootPath = "webroot"
    });
    builder.Configuration
        .SetBasePath(Directory.GetCurrentDirectory())
        .AddJsonFile("settings.json", false, true)
        .AddJsonFile($"settings.{Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT")}.json", false, true)
        .AddEnvironmentVariables();
    // Remaining code
Am I doing something wrong, or did something change in .NET 7?
In short, this problem occurs because the settings.production.json file was not included in the published output.
We can verify this by inspecting (and, as a quick test, uploading) the 'settings.production.json' file on the scm (Kudu) site. The URL is https://your_appname.scm.azurewebsites.net/newui .
Solution:
Official doc: Include files
Sample: Use ResolvedFileToPublish in ItemGroup
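A minimal csproj sketch of one common way to do this (assuming settings.production.json sits at the project root; the linked doc shows the equivalent ResolvedFileToPublish approach):
<ItemGroup>
  <!-- Make sure the environment-specific settings file lands in the publish output -->
  <Content Include="settings.production.json" CopyToPublishDirectory="PreserveNewest" />
</ItemGroup>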

Azure-ML Deployment does NOT see AzureML Environment (wrong version number)

I've followed the documentation pretty well as outlined here.
I've setup my azure machine learning environment the following way:
from azureml.core import Workspace
# Connect to the workspace
ws = Workspace.from_config()
from azureml.core import Environment
from azureml.core import ContainerRegistry
myenv = Environment(name = "myenv")
myenv.inferencing_stack_version = "latest" # This will install the inference specific apt packages.
# Docker
myenv.docker.enabled = True
myenv.docker.base_image_registry.address = "myazureregistry.azurecr.io"
myenv.docker.base_image_registry.username = "myusername"
myenv.docker.base_image_registry.password = "mypassword"
myenv.docker.base_image = "4fb3..."
myenv.docker.arguments = None
# Environment variables (I need Python to look at certain folders)
myenv.environment_variables = {"PYTHONPATH":"/root"}
# python
myenv.python.user_managed_dependencies = True
myenv.python.interpreter_path = "/opt/miniconda/envs/myenv/bin/python"
from azureml.core.conda_dependencies import CondaDependencies
conda_dep = CondaDependencies()
conda_dep.add_pip_package("azureml-defaults")
myenv.python.conda_dependencies=conda_dep
myenv.register(workspace=ws) # works!
I have a score.py file configured for inference (not relevant to the problem I'm having)...
I then setup inference configuration
from azureml.core.model import InferenceConfig
inference_config = InferenceConfig(entry_script="score.py", environment=myenv)
I setup my compute cluster:
from azureml.core.compute import ComputeTarget, AksCompute
from azureml.exceptions import ComputeTargetException
# Choose a name for your cluster
aks_name = "theclustername"
# Check to see if the cluster already exists
try:
    aks_target = ComputeTarget(workspace=ws, name=aks_name)
    print('Found existing compute target')
except ComputeTargetException:
    print('Creating a new compute target...')
    prov_config = AksCompute.provisioning_configuration(vm_size="Standard_NC6_Promo")
    aks_target = ComputeTarget.create(workspace=ws, name=aks_name, provisioning_configuration=prov_config)
    aks_target.wait_for_completion(show_output=True)
from azureml.core.webservice import AksWebservice
# Example
gpu_aks_config = AksWebservice.deploy_configuration(autoscale_enabled=False,
                                                    num_replicas=3,
                                                    cpu_cores=4,
                                                    memory_gb=10)
Everything succeeds; then I try to deploy the model for inference:
from azureml.core.model import Model
model = Model(ws, name="thenameofmymodel")
# Name of the web service that is deployed
aks_service_name = 'tryingtodeply'
# Deploy the model
aks_service = Model.deploy(ws,
                           aks_service_name,
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=gpu_aks_config,
                           deployment_target=aks_target,
                           overwrite=True)
aks_service.wait_for_deployment(show_output=True)
print(aks_service.state)
And it fails, saying that it can't find the environment. More specifically, my environment is at version 11, but the deployment keeps trying to fetch a version one higher (i.e., version 12) than the current one:
Failed
ERROR - Service deployment polling reached non-successful terminal state, current service state: Failed
Operation ID: 0f03a025-3407-4dc1-9922-a53cc27267d4
More information can be found here:
Error:
{
  "code": "BadRequest",
  "statusCode": 400,
  "message": "The request is invalid",
  "details": [
    {
      "code": "EnvironmentDetailsFetchFailedUserError",
      "message": "Failed to fetch details for Environment with Name: myenv Version: 12."
    }
  ]
}
I have tried to manually edit the environment JSON to match the version that azureml is trying to fetch, but nothing works. Can anyone see anything wrong with this code?
Update
Changing the name of the environment (e.g., my_inference_env) and passing it to InferenceConfig seems to be on the right track. However, the error now changes to the following:
Running..........
Failed
ERROR - Service deployment polling reached non-successful terminal state, current service state: Failed
Operation ID: f0dfc13b-6fb6-494b-91a7-de42b9384692
More information can be found here: https://some_long_http_address_that_leads_to_nothing
Error:
{
  "code": "DeploymentFailed",
  "statusCode": 404,
  "message": "Deployment not found"
}
Solution
The answer from Anders below is indeed correct regarding the use of Azure ML environments. However, the last error I was getting was because I was setting the container image using the digest value (a SHA) and NOT the image name and tag (e.g., imagename:tag). Note this line of code in the first block:
myenv.docker.base_image = "4fb3..."
I reference the digest value, but it should be changed to
myenv.docker.base_image = "imagename:tag"
Once I made that change, the deployment succeeded! :)
One concept that took me a while to get was the bifurcation of registering and using an Azure ML Environment. If you have already registered your env, myenv, and none of the details of your environment have changed, there is no need to re-register it with myenv.register(). You can simply fetch the already-registered env using Environment.get(), like so:
myenv = Environment.get(ws, name='myenv', version=11)
My recommendation would be to name your environment something new: like "model_scoring_env". Register it once, then pass it to the InferenceConfig.
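Putting the two fixes together, a minimal sketch (the environment name model_scoring_env and the image reference imagename:tag are placeholders):
from azureml.core import Environment
from azureml.core.model import InferenceConfig

# Register once, referencing the image by name:tag rather than by digest
scoring_env = Environment(name="model_scoring_env")
scoring_env.docker.enabled = True
scoring_env.docker.base_image_registry.address = "myazureregistry.azurecr.io"
scoring_env.docker.base_image = "imagename:tag"  # placeholder image reference
scoring_env.register(workspace=ws)

# In later sessions, fetch the registered environment instead of re-registering it
scoring_env = Environment.get(ws, name="model_scoring_env")
inference_config = InferenceConfig(entry_script="score.py", environment=scoring_env)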

SPNEGO uses wrong KRBTGT principal name

I am trying to enable Kerberos authentication for our website - The idea is to have users logged into a Windows AD domain get automatic login (and initial account creation)
Before I tackle the Windows side of things, I wanted to get it work locally.
So I made a test KDC/KADMIN container using git@github.com:ist-dsi/docker-kerberos.git
The webserver is in a local Docker container with nginx and the SPNEGO module compiled in.
The KDC/KADMIN container is at 172.17.0.2 and accessible from my webserver container.
Here is my local krb5.conf:
[libdefaults]
default_realm = SERVER.LOCAL
[realms]
SERVER.LOCAL = {
    kdc_ports = 88,750
    kadmind_port = 749
    kdc = 172.17.0.2:88
    admin_server = 172.17.0.2:749
}
[domain_realms]
.server.local = SERVER.LOCAL
server.local = SERVER.LOCAL
and the krb5.conf on the webserver container:
[libdefaults]
default_realm = SERVER.LOCAL
default_keytab_name = FILE:/etc/krb5.keytab
ticket_lifetime = 24h
kdc_timesync = 1
ccache_type = 4
forwardable = false
proxiable = false
[realms]
LOCALHOST.LOCAL = {
    kdc_ports = 88,750
    kadmind_port = 749
    kdc = 172.17.0.2:88
    admin_server = 172.17.0.2:749
}
[domain_realms]
.server.local = SERVER.LOCAL
server.local = SERVER.LOCAL
Here are the principals and the keytab setup (the keytab is copied to the web container as /etc/krb5.keytab):
rep ~/project * rep_krb_test $ kadmin -p kadmin/admin@SERVER.LOCAL -w hunter2
Authenticating as principal kadmin/admin@SERVER.LOCAL with password.
kadmin: list_principals
K/M@SERVER.LOCAL
kadmin/99caf4af9dc5@SERVER.LOCAL
kadmin/admin@SERVER.LOCAL
kadmin/changepw@SERVER.LOCAL
krbtgt/SERVER.LOCAL@SERVER.LOCAL
noPermissions@SERVER.LOCAL
rep_movsd@SERVER.LOCAL
kadmin: q
rep ~/project * rep_krb_test $ ktutil
ktutil: addent -password -p rep_movsd@SERVER.LOCAL -k 1 -f
Password for rep_movsd@SERVER.LOCAL:
ktutil: wkt krb5.keytab
ktutil: q
rep ~/project * rep_krb_test $ kinit -C -p rep_movsd@SERVER.LOCAL
Password for rep_movsd@SERVER.LOCAL:
rep ~/project * rep_krb_test $ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: rep_movsd@SERVER.LOCAL
Valid starting Expires Service principal
02/07/20 04:27:44 03/07/20 04:27:38 krbtgt/SERVER.LOCAL@SERVER.LOCAL
The relevant nginx config:
server {
    location / {
        uwsgi_pass django;
        include /usr/lib/proj/lib/wsgi/uwsgi_params;
        auth_gss on;
        auth_gss_realm SERVER.LOCAL;
        auth_gss_service_name HTTP;
    }
}
Finally etc/hosts has
# use alternate local IP address
127.0.0.2 server.local server
Now I try to access this with curl:
* Trying 127.0.0.2:80...
* Connected to server.local (127.0.0.2) port 80 (#0)
* gss_init_sec_context() failed: Server krbtgt/LOCAL@SERVER.LOCAL not found in Kerberos database.
* Server auth using Negotiate with user ''
> GET / HTTP/1.1
> Host: server.local
> User-Agent: curl/7.71.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
....
As you can see, it is trying to use the principal "krbtgt/LOCAL@SERVER.LOCAL", whereas kinit shows "krbtgt/SERVER.LOCAL@SERVER.LOCAL".
How do I get this to work?
Thanks in advance.
So it turns out that I needed
auth_gss_service_name HTTP/server.local;
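In context, the corrected location block from the config above reads:
server {
    location / {
        uwsgi_pass django;
        include /usr/lib/proj/lib/wsgi/uwsgi_params;
        auth_gss on;
        auth_gss_realm SERVER.LOCAL;
        auth_gss_service_name HTTP/server.local;
    }
}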
Some other tips for issues encountered:
Make sure the keytab file is readable by the webserver process (www-data or whichever user the server runs as)
Make sure the keytab principals are in the correct order
Use export KRB5_TRACE=/dev/stderr and curl to test; Kerberos gives a very detailed log of what it's doing and why it fails. For example:
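A typical test invocation (assuming curl was built with GSS-API/SPNEGO support):
export KRB5_TRACE=/dev/stderr
curl -v --negotiate -u : http://server.local/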

GitLab Rack_Attack settings

I have a GitLab CE installation with Rack_Attack enabled, but I am not able to find the current values of the following parameters: maxretry, findtime, and bantime.
So, where and how can I check the values of these parameters in my GitLab installation?
If you installed the Omnibus GitLab version, the settings can be found in /etc/gitlab/gitlab.rb:
gitlab_rails['rack_attack_git_basic_auth'] = {
  'enabled' => true,
  'ip_whitelist' => ["127.0.0.1"],
  'maxretry' => 10,
  'findtime' => 60,
  'bantime' => 3600
}
If you installed GitLab from source, the settings can be found in config/initializers/rack_attack.rb.
From the Documentation.
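Note that on Omnibus installs, changes to /etc/gitlab/gitlab.rb only take effect after a reconfigure:
sudo gitlab-ctl reconfigure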

Puppet can't find class firewall

I have a basic Puppet install, set up using this tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-puppet-4-on-ubuntu-16-04
When I run /opt/puppetlabs/bin/puppet agent --test on my node, I get:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Error while evaluating a Resource Statement. Could not find declared class firewall at /etc/puppetlabs/code/environments/production/manifests/site.pp:7:1 on node mark-inspiron.
On my node:
/opt/puppetlabs/bin/puppet module list
returns
/etc/puppetlabs/code/environments/production/modules
└── puppetlabs-firewall (v1.9.0)
On my puppet master at /etc/puppetlabs/code/environments/production/manifests/site.pp:
file { '/tmp/it_works.txt':  # resource type file and filename
  ensure  => present,        # make sure it exists
  mode    => '0644',         # file permissions
  content => "It works on ${ipaddress_eth0}!\n", # Print the eth0 IP fact
}
class { 'firewall': }
resources { 'firewall':
  purge => true,
}
firewall { "051 asterisk-set-rate-limit-register":
  string      => "REGISTER sip:",
  string_algo => "bm",
  dport       => '5060',
  proto       => 'udp',
  recent      => 'set',
  rname       => 'VOIPREGISTER',
  rsource     => 'true';
}
firewall { "052 asterisk-drop-rate-limit-register":
  string      => "REGISTER sip:",
  string_algo => "bm",
  dport       => '5060',
  proto       => 'udp',
  action      => 'drop',
  recent      => 'update',
  rseconds    => '600',
  rhitcount   => '5',
  rname       => 'VOIPREGISTER',
  rsource     => true,
  rttl        => true;
}
The file resource works, but the firewall part does not.
In an agent/master setup, the modules need to be installed on the master, somewhere in your modulepath. You can either place them in the modules directory within your $codedir (normally /etc/puppetlabs/code/modules) or in your directory environment's modules directory (likely /etc/puppetlabs/code/environments/production/modules in your case, since your cited site.pp is there). If you defined additional module paths in your environment.conf, you can also place the modules there.
You can install/deploy them with a variety of methods, such as librarian-puppet, r10k, or Code Manager (in Puppet Enterprise). However, the easiest method for you would be to run puppet module install puppetlabs-firewall on the master, as shown below. Your Puppet catalog will then find the firewall class during compilation.
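For example, on the master (using the default Puppet 4 paths from the tutorial):
/opt/puppetlabs/bin/puppet module install puppetlabs-firewall
/opt/puppetlabs/bin/puppet module list   # verify the module now appears in the master's modulepath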
On a side note, that:
resources { 'firewall':
purge => true,
}
will purge any firewall rules on the system that are not managed by Puppet (as defined by the module's notion of what the firewall resource manages). This is nice for eliminating local changes that people make, but it can also have surprising side effects, so be careful.
