I'm having issues using LDAP on my GitLab server. The LDAP server is not synchronizing. If I run "LdapGroupSyncWorker" or "LdapSyncWorker" in the Rails console, this is what I get:
irb(main):004:0> LdapGroupSyncWorker.new.perform 1
License Load (0.8ms) /*application:console,db_config_name:main*/ SELECT "licenses".* FROM "licenses" ORDER BY "licenses"."id" DESC LIMIT 100
If I run "ldapsearch -D ..." with the same filter used in the GitLab LDAP configuration, it shows all the users included; in summary, I have 171 users:
# search result
search: 2
result: 0 Success
# numResponses: 172
# numEntries: 171
What does this mean? Can I only get 100 users? How can I change/increase this?
I'm using:
Ruby: ruby 2.7.5p203 (2021-11-24 revision f69aeb8314) [x86_64-linux]
GitLab: 14.9.5-ee (661f7e4160f) EE
GitLab Shell: 13.24.0
PostgreSQL: 12.7
------------------------------------------------------------[ booted in 38.06s ]
Loading production environment (Rails 6.1.4.6)
I executed "gitlab-rake gitlab:ldap:check[200] --trace" and got:
LDAP: ... Server: ldapmain
LDAP authentication... Success
LDAP users with access to your GitLab server (only showing the first 200 results)
DN: uid=test_user,cn=users,cn=accounts,dc=tst,dc=test,dc=es uid: test_user
..........
..........
Checking LDAP ... Finished
The issue is that when I try to add members from LDAP to a project in GitLab, the users shown by this command do not appear in the list of GitLab members.
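For diagnosis, one thing that can be checked is how many LDAP identities GitLab itself has linked, as opposed to what ldapsearch returns. This is only a sketch, and 'ldapmain' is assumed to be the provider name from the configuration:
sudo gitlab-rails runner "puts Identity.where(provider: 'ldapmain').count"
If this number is far below 171, those users have not been created or linked in GitLab yet, which would explain why they don't show up when adding project members.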
yst_c_testInbound is an existing job in the box yst_b_test_Inbound_U01.
We are changing the DNS alias from the old name "str-uat.capint.com" to the new name "str-r7uat.capint.com".
The AUTOSERV, SERVER1, and SERVER2 environment variables are set properly.
The job is created successfully if the machine's hostname is given for the "machine" tag in the JIL file content. The old DNS name also works fine.
It gives the following error for the new DNS name. Please let me know what the issue with the DNS name is.
Pinging str-r7uat.capint.com works fine.
Error:
C:\AutoSys_Tools\bin>jil < yst_c_testInbound.jil
CAUAJM_I_50323 Inserting/Updating job: yst_c_testInbound
CAUAJM_E_10281 ERROR for Job: yst_c_testInbound < machine 'str-r7uat.capint.com' does not exist >
CAUAJM_E_10302 Database Change WAS NOT successful.
CAUAJM_E_50198 Exit Code = 1
JIL file Content - yst_c_testInbound.jil
update_job: yst_c_testInbound job_type: CMD
box_name: yst_b_test_Inbound_U01
command: perl -w $SYSTR_PL/strInBound.pl -PortNo 12222
machine: str-r7uat.capint.com
owner: testulnx
permission:
date_conditions: 0
description: "JMS Flow process to send the messages from STR to MQ"
std_out_file: ">>$STR_LOG/tradeflow_arts_impact_$$YST_STR_CURR_BUS_DATE.log"
std_err_file: ">>$STR_LOG/tradeflow_arts_impact_$$YST_STR_CURR_BUS_DATE.log"
alarm_if_fail: 0
profile: "/apps/profile/test_profile"
alarm_if_terminated: 0
timezone: US/Eastern
While creating the job using the JIL file yst_c_testInbound.jil, I get the error below.
You need to add the machine first. You can't update a job that references a machine that hasn't been defined.
If you run:
autorep -M str-r7uat.capint.com
It will most likely return CAUAJM_E_50111 Invalid Machine Name: str-r7uat.capint.com
So add the machine first; then you can run the job update JIL.
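A minimal machine definition you could load with jil first (saved to a file such as add_machine.jil; "type: a" for an agent machine is an assumption, so adjust the attributes to your agent setup) would look like:
insert_machine: str-r7uat.capint.com
type: a
After loading it, autorep -M str-r7uat.capint.com should list the machine, and then jil < yst_c_testInbound.jil should go through.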
Cheers.
I am wondering if anyone knows about this problem: I am starting Keycloak as a GitLab service in order to run integration tests in a pipeline, using the "--import-realm" option. It works very well locally, and it works some of the time in GitLab. However, sometimes (I'd say a little more than 50% of the time) the realm is simply not imported, without any error message (and then, of course, my test fails).
Here is my job description:
integration-tests-common:
  variables:
    FF_NETWORK_PER_BUILD: "true"
    KEYCLOAK_DATA_IMPORT_DIR: /builds/js-dev/myproject/Keycloak-testapp/data
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: admin
    KC_HTTPS_CERTIFICATE_FILE: /opt/keycloak/certificates/keycloak.crt.pem
    KC_HTTPS_CERTIFICATE_KEY_FILE: /opt/keycloak/certificates/keycloak.key.pem
  services:
    # (custom image below is based on quay.io/keycloak/keycloak:18.0.2)
    - name: myinternalrepo/mykeycloakimage:mytag
      alias: keycloak
      command: ["start-dev", "--import-realm", "--health-enabled=true", "--http-port=8089", "--log=console,file"]
  script:
    # Before E2E tests: first wait for Keycloak
    - |
      set -x
      count=0;
      while [ "$(curl -s -o /dev/null -w '%{http_code}' http://keycloak:8089/health)" != "200" ]
      do
        echo "waiting for Keycloak..."
        sleep 1;
        let count=count+1;
        if [ $count -gt 100 ]
        then
          echo "Keycloak is not starting, exiting"
          exit 1;
        fi
      done
      echo "Keycloak is UP after $count retries"
      set +x
    # ... (the rest is my integration test)
KEYCLOAK_DATA_IMPORT_DIR is used by a custom entrypoint to create a symbolic link to /opt/keycloak/data/import (since I cannot mount a volume for a Gitlab service, as far as I know):
ln -s $KEYCLOAK_DATA_IMPORT_DIR /opt/keycloak/data/import
In working cases, I have this log:
2022-08-02 05:46:14,468 INFO [org.keycloak.services] (main) KC-SERVICES0050: Initializing master realm
2022-08-02 05:46:19,869 INFO [org.keycloak.services] (main) KC-SERVICES0004: Imported realm test from file /opt/keycloak/bin/../data/import/realm-export.json.
2022-08-02 05:46:20,232 INFO [org.keycloak.services] (main) KC-SERVICES0009: Added user 'admin' to realm 'master'
But in other cases the log shows no error; it continues as if the import option had not been given:
2022-08-02 06:04:14,230 INFO [org.keycloak.services] (main) KC-SERVICES0050: Initializing master realm
2022-08-02 06:04:18,220 INFO [org.keycloak.services] (main) KC-SERVICES0009: Added user 'admin' to realm 'master'
I have also added an nginx in the keycloak custom image exposing the Keycloak logs (because it's difficult to get full logs from Gitlab services otherwise!), but I couldn't find anything more in them.
I don't know if this is a problem with my custom entrypoint and the symbolic link, with Keycloak, or with GitLab services... All I know is that when it fails, I retry the job, sometimes multiple times, and it usually ends up working. Any help would be appreciated.
By adding an "ls" to my custom Keycloak image entrypoint, I noticed that the GitLab project files are not present in the error cases. So this is more of a GitLab services issue than a Keycloak issue.
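As a sketch of how to make that failure explicit instead of silent (the path and variable come from the job above; everything else is an assumption about the custom entrypoint), the symlink creation could be guarded:
# fail fast if the project files were not made available to the service container
if [ -d "$KEYCLOAK_DATA_IMPORT_DIR" ]; then
  ln -s "$KEYCLOAK_DATA_IMPORT_DIR" /opt/keycloak/data/import
else
  echo "ERROR: $KEYCLOAK_DATA_IMPORT_DIR not found, realm will not be imported" >&2
  exit 1
fi
With that, the service exits immediately and the job fails with a clear message rather than running without the realm.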
In addition, it is not clear from the GitLab services documentation (https://docs.gitlab.com/ee/ci/services/) whether services are supposed to have access to the project files or not. I had assumed so, because I made a test which worked. But in the end, the solution was to integrate my realm's file into my base Docker image and not rely on the files from the repo.
I am trying to use GitLab CI/CD with GitLab Runner for the first time. GitLab is self-hosted.
I have a Git project in GitLab. By default, users must use SSH with a private key for Git commands.
But I should point out that access control is set to both: SSH and HTTPS.
For this project I created a simple .gitlab-ci.yml file (with a single test step) and registered a GitLab runner using the token specified by GitLab.
When I run the job, the following error occurs:
Running with gitlab-runner 12.1.0 (de7731dd)
on BK_runner_Test3 hcVfxLhx
Using Shell executor...
Running on ns344345...
Fetching changes...
Reinitialized existing Git repository in /home/gitlab-runner/builds/hcVfxLhx/0/XXXXXXXXX/test/.git/
fatal: unable to access 'https://gitlab-ci-token:[MASKED]@gitlab.xxxxxxxx.com/XXXXXXXX/test.git/': The requested URL returned error: 500
ERROR: Job failed: exit status 1
Finally, in the Apache logs, I found requests that returned errors 401 and 500.
So I think that, at that point, the job tries to clone the project. I tried the following command myself:
git clone https://gitlab-ci-token:[MASKED]@gitlab.xxxxxxxx.com/XXXXXXXX/test.git/
And the result is a 401 error!
Of course, [MASKED] was replaced with the actual token used to register the runner.
So why this 401 error? What did I miss in the configuration to authorize jobs to use Git commands?
More information from log file /var/log/gitlab/gitlab-rails/production.log
Started GET "/XXXXXXXX/test.git/info/refs?service=git-upload-pack" for 111.222.333.444 at 2019-08-08 15:39:05 +0200
Processing by Projects::GitHttpController#info_refs as */*
Parameters: {"service"=>"git-upload-pack", "namespace_id"=>"XXXXXXXX", "project_id"=>"test.git"}
Filter chain halted as :authenticate_user rendered or redirected
Completed 401 Unauthorized in 15ms (Views: 0.7ms | ActiveRecord: 1.4ms | Elasticsearch: 0.0ms)
Started GET "/XXXXXXXX/test.git/info/refs?service=git-upload-pack" for 111.222.333.444 at 2019-08-08 15:39:05 +0200
Processing by Projects::GitHttpController#info_refs as */*
Parameters: {"service"=>"git-upload-pack", "namespace_id"=>"XXXXXXXX", "project_id"=>"test.git"}
Completed 500 Internal Server Error in 12ms (ActiveRecord: 3.0ms | Elasticsearch: 0.0ms)
JWT::DecodeError (Nil JSON web token):
It looks like the Git request is being authenticated/authorized correctly in GitLab Rails, but when it's passed over to GitLab Workhorse to actually send the data to the client, some authorization fails.
Check the Workhorse logs to see if there's more information. They are in /var/log/gitlab/gitlab-workhorse/ by default in an Omnibus installation. I'm not exactly sure what would cause this, but for some reason the header that authorizes Workhorse isn't present.
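To actually look at those logs on the GitLab server, something like the following should work (assuming an Omnibus install; gitlab-ctl tail takes the service name, and "current" is the usual runit log file name):
sudo gitlab-ctl tail gitlab-workhorse
# or read the log file directly:
sudo less /var/log/gitlab/gitlab-workhorse/current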
Make sure your GitLab version is reasonably close to your runner version. I see you're running Runner 12.1 so GitLab should also be 12.1.
When I run my job on Gitlab CI/CD, after a while I obtain the following error message:
Job's log exceeded limit of 4194304 bytes.
How to change this limit?
To change the build log size limit for your jobs in GitLab CI/CD, you can edit your runner's config.toml file and set a new limit in kilobytes:
[[runners]]
output_limit = 10000
According to the documentation
output_limit : Maximum build log size in kilobytes. Default is 4096 (4MB).
For this to take effect, you need to restart the gitlab runner:
sudo gitlab-runner restart
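If you are not sure which config.toml your runner is actually using, gitlab-runner prints the config file it loaded when listing runners (run this on the runner host; the path shown is just the usual default for a root-installed runner):
sudo gitlab-runner list
# typically logs something like: ConfigFile=/etc/gitlab-runner/config.toml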
So, an answer for those who don't have access to the GitLab Runner configuration file that @Ortomala Lokni refers to.
You can redirect the logger output and archive it quite easily by doing the following (note: this example is for Maven builds).
quality-check:
  extends: .retry-on-job-failure
  stage: quality-check
  timeout: 2 hours
  artifacts:
    name: "$CI_BUILD"
    paths:
      - target/client-quality_check.log
    when: always
    expire_in: 3 days
  only:
    - main
    - merge_requests
  script:
    - echo "Sonar Qube Start"
    - mvn $MAVEN_CLI_OPTS sonar:sonar --log-file target/client-quality_check.log -Dsonar.projectKey=$PROJECT_KEY -Dsonar.host.url=$SONAR_HOST_URL -Dsonar.login=$SONAR_TOKEN
    - echo "Sonar Qube Complete"
Notice that within the Maven command I use --log-file to redirect the Maven output to target/client-quality_check.log, and under artifacts I archive this log file by providing its path.
Once this job finishes, I can look at the job's artifacts and see my log file with all the logger output in it.
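If you also want to pull that archived log outside the UI, the job artifacts API can be used; this is just a sketch, and the host, project ID, job ID, and token are placeholders:
curl --header "PRIVATE-TOKEN: <your_access_token>" \
  --output artifacts.zip \
  "https://gitlab.example.com/api/v4/projects/<project_id>/jobs/<job_id>/artifacts"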
Starting from GitLab 14.1, there is another configuration option that affects the maximum log size: ci_jobs_trace_size_limit (100 MB by default). So altering only the runner limit, as described in the other answers, is no longer sufficient.
Since GitLab is all about speed and usability, modifying ci_jobs_trace_size_limit is only possible by executing a command directly in the Rails console on the system (or Docker container) where GitLab is running.
root@192:/# gitlab-rails console -e production
--------------------------------------------------------------------------------
Ruby: ruby 2.7.5p203 (2021-11-24 revision f69aeb8314) [x86_64-linux]
GitLab: 14.8.2 (c7be43f6dd3) FOSS
GitLab Shell: 13.23.2
PostgreSQL: 12.7
-----------------------------------------------------------[ booted in 122.49s ]
Loading production environment (Rails 6.1.4.6)
irb(main):001:0> Plan.default.actual_limits.update!(ci_jobs_trace_size_limit: 100000000)
=> true
irb(main):002:0> quit
Note: if it seems like gitlab-rails console -e production isn't doing anything and the console prompt doesn't appear, you just need to wait.
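If you want to check the value before (or after) changing it, the same setting can be read back in one line from the shell (a sketch using the same Plan.default object as above):
sudo gitlab-rails runner "puts Plan.default.actual_limits.ci_jobs_trace_size_limit"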
I want to use https://forge.puppetlabs.com/example42/splunk to setup splunk on some of my systems.
So on my puppet master I did puppet module install example42-splunk.
I use the PE console so I added the class splunk and associated splunk with a group that has one of my nodes, my-mongo-1.
I log on to my-mongo-1 and execute...
[root@my-mongo-1 ~]# puppet agent -t
...
Info: Caching catalog for my-mongo-1
Info: Applying configuration version '1417030622'
Notice: /Stage[main]/Splunk/Package[splunk]/ensure: created
Notice: /Stage[main]/Splunk/Exec[splunk_create_service]/returns: executed successfully
Notice: /Stage[main]/Splunk/File[splunk_change_admin_password]/ensure: created
Info: /Stage[main]/Splunk/File[splunk_change_admin_password]: Scheduling refresh of Exec[splunk_change_admin_password]
Notice: /Stage[main]/Splunk/Service[splunk]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Splunk/Service[splunk]: Unscheduling refresh on Service[splunk]
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns: Could not look up HOME variable. Auth tokens cannot be cached.
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns:
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns: In handler 'users': The password cannot be set to the default password.
Error: /Stage[main]/Splunk/Exec[splunk_change_admin_password]: Failed to call refresh: /opt/splunkforwarder/bin/puppet_change_admin_password returned 22 instead of one of [0]
Error: /Stage[main]/Splunk/Exec[splunk_change_admin_password]: /opt/splunkforwarder/bin/puppet_change_admin_password returned 22 instead of one of [0]
Notice: Finished catalog run in 11.03 seconds
So what am I doing wrong here?
Why do I get the "Could not look up HOME variable. Auth tokens cannot be cached." error?
I saw you asked this on Ask Puppet, so I gave it a quick test in Vagrant, and there are two solutions:
1) Give a different password for Splunk in Puppet (as it's complaining about using the default password):
class { "splunk":
install => "server",
admin_password => 'n3wP4assw0rd',
}
2) Upgrade the module to a newer version that doesn't have this issue:
puppet module upgrade example42-splunk --force
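Either way, re-running the agent on the node afterwards (the same command as in the question) is a quick way to confirm that the splunk_change_admin_password exec no longer fails:
puppet agent -t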