I have installed GitLab on an AWS server and it is working as expected.
http://ec2-54-167-34-63.compute-1.amazonaws.com/
But when I click on the "Registry" tab, I am shown an error page (500).
The relevant part of /etc/gitlab/gitlab.rb:
gitlab_rails['gitlab_default_projects_features_container_registry'] = true
# registry_external_url 'https://registry.gitlab.example.com'
registry_external_url 'http://ec2-54-167-34-63.compute-1.amazonaws.com:4567'
# Settings used by GitLab application
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "http://ec2-54-167-34-63.compute-1.amazonaws.com"
gitlab_rails['registry_port'] = "5005"
gitlab_rails['registry_api_url'] = "http://localhost:5000"
gitlab_rails['registry_key_path'] = "/var/opt/gitlab/gitlab-rails/certificate.key"
gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
gitlab_rails['registry_issuer'] = "omnibus-gitlab-issuer"
# Settings used by Registry application
registry['enable'] = true
registry['username'] = "registry"
registry['group'] = "registry"
registry['uid'] = nil
registry['gid'] = nil
registry['dir'] = "/var/opt/gitlab/registry"
registry['log_directory'] = "/var/log/gitlab/registry"
registry['log_level'] = "info"
registry['rootcertbundle'] = "/var/opt/gitlab/registry/certificate.crt"
registry['storage_delete_enabled'] = true
Update
As per the logs below, I need a gitlab-registry.key file in the correct location. What is this file and how do I generate one?
tail /var/log/gitlab/gitlab-rails/production.log
Started GET "/root/test/container_registry" for 125.99.49.46 at 2016-10-24 08:29:27 +0000
Processing by Projects::ContainerRegistryController#index as HTML
Parameters: {"namespace_id"=>"root", "project_id"=>"test"}
Completed 500 Internal Server Error in 23ms (ActiveRecord: 3.5ms)
Errno::ENOENT (No such file or directory @ rb_sysopen - /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key):
lib/json_web_token/rsa_token.rb:20:in `read'
lib/json_web_token/rsa_token.rb:20:in `key_data'
lib/json_web_token/rsa_token.rb:24:in `key'
lib/json_web_token/rsa_token.rb:28:in `public_key'
lib/json_web_token/rsa_token.rb:33:in `kid'
lib/json_web_token/rsa_token.rb:12:in `encoded'
app/services/auth/container_registry_authentication_service.rb:30:in `full_access_token'
app/models/project.rb:421:in `container_registry_repository'
app/controllers/projects/container_registry_controller.rb:28:in `container_registry_repository'
app/controllers/projects/container_registry_controller.rb:8:in `index'
lib/gitlab/request_profiler/middleware.rb:15:in `call'
lib/gitlab/middleware/go.rb:16:in `call'
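For reference, the missing file is the RSA private key GitLab uses to sign the registry's JWT auth tokens (see the json_web_token/rsa_token frames above). A reconfigure should create it, but as a rough sketch, assuming the path from the stack trace, it could also be generated manually:
# sketch only: the reconfigure is what normally creates and wires up this key
sudo openssl genrsa -out /var/opt/gitlab/gitlab-rails/etc/gitlab-registry.key 2048
sudo gitlab-ctl reconfigure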
Update 2
I guess I need to generate a certificate as explained here...
http://www.bonusbits.com/wiki/HowTo:Setup_HTTPS_for_Gitlab
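As a sketch (the output paths and certificate subject here are illustrative assumptions, not taken from that page), a self-signed certificate/key pair can be generated with:
sudo mkdir -p /etc/gitlab/ssl
# self-signed cert valid for one year; adjust -subj to the real hostname
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=ec2-54-167-34-63.compute-1.amazonaws.com" \
  -keyout /etc/gitlab/ssl/registry.key \
  -out /etc/gitlab/ssl/registry.crt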
Check the GitLab server log, since it is an error 500 (an example of such logs: issue 23019).
There is a pending issue with GitLab 8.13, issue 23575 ("No way to enable container registry"), with merge request 7037 ("Fix typo in project settings that prevents users from enabling container registry").
They might be related to your issue.
Issue 23339 also mentions "sorting out self signed certs problem (my registry is under different domain than gitlab itself)": that should not be the case here.
Issue 23181 ("Pushing to Registry Still Frequently Encounters unauthorized: authentication required") suggests that the error is gone for Docker 1.11+ (so it depends on which version of Docker you are using on AWS).
Regarding the gitlab-registry.key mentioned in the OP's edit, it should be created by a simple reconfigure, if declared properly.
So double-check issue 1316:
"It turns out it was a typo on my part. The config key is registry_nginx["ssl_certificate"], not registry_nginx[ssl_certificate]."
See also issue 1218 and merge request 3787, which show how this feature was added.
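For reference, applying any gitlab.rb change (which is what should regenerate the registry key) is just:
sudo gitlab-ctl reconfigure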
Try to set only a few of the registry settings, like:
registry_external_url 'http://ec2-54-167-34-63.compute-1.amazonaws.com:4567'
Don't set the gitlab_rails['registry_*'] and registry['xxxxx'] values if you want to keep the defaults, and don't set values if you don't know what you are modifying.
About the certificates: check the very bottom of the gitlab.rb file, where you can set your certificates for the registry:
registry_nginx['ssl_certificate'] = "/path/to/my/cert.crt"
registry_nginx['ssl_certificate_key'] = "/path/to/my/key.key"
Also check the output of this command to verify your GitLab instance:
sudo gitlab-rake gitlab:check
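If the Registry service itself is suspected, its log can also be followed with the standard Omnibus log command (a sketch; the service name assumes registry['enable'] = true as in the config above):
sudo gitlab-ctl tail registry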
Just like deporclick did, set your certificates for the registry as:
registry_nginx['ssl_certificate'] = "/path/to/my/cert.crt"
registry_nginx['ssl_certificate_key'] = "/path/to/my/key.key"
Related
I have installed a new GitLab instance on my Ubuntu server with an Apache2 web server. I created a new repository, "testik.git", in the web GUI and got this HTTP path:
http://git.domain.tld/testovic/testik.git
But when I use this in a git command on my Windows PC,
git clone http://git.domain.tld/testovic/testik.git
the command returns:
Cloning into 'testik'...
fatal: unable to access 'http://git.domain.tld/testovic/testik.git/': Could not resolve host: git.domain.tldtestovic
I see an unnecessary trailing slash in the "fatal" message, and I also see that a slash is missing after the TLD, before the username.
The same problem occurs when I open a project in the web GUI, click in the left-side menu on e.g. "Security & Compliance" --> "Configuration", and in the first section, "Static Application Security Testing (SAST)", click the "Configure with a merge request" button.
I get an error page with a URL starting with
https://git.domain.tldtestovic/testik/-/merge_requests/new?merge_request....
Here the slash after the TLD is missing again... When I manually add the slash after the TLD and press Enter, everything works fine.
Does anybody have a good idea or solution for the missing slash after the TLD? Where could the problem be?
I know this is not a DNS problem, because ping git.domain.tld is OK. I suppose something must be wrong in the configuration.
In /etc/gitlab/gitlab.rb I have made these manual changes:
external_url 'http://git.domain.tld'
gitlab_rails['trusted_proxies'] = ['<my Ubuntu serverIP>']
gitlab_rails['initial_root_password'] = "<some good password>"
gitlab_rails['db_adapter'] = "postgresql"
gitlab_rails['db_encoding'] = "unicode"
gitlab_rails['db_database'] = "<database name>"
gitlab_rails['db_username'] = "<some username>"
gitlab_rails['db_password'] = "<some another good password>"
gitlab_rails['db_host'] = "<correct address to the nonlocalhost DB server>"
gitlab_rails['db_port'] = 5432
gitlab_workhorse['listen_network'] = "tcp"
gitlab_workhorse['listen_addr'] = "127.0.0.1:8181"
postgresql['enable'] = false
web_server['external_users'] = ['www-data']
nginx['enable'] = false
nginx['redirect_http_to_https'] = true
I have read a lot of posts, pages, etc., but without success. Thank you very much for any way to get Git working with GitLab.
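For context, a minimal Apache virtual host proxying to GitLab Workhorse on the 127.0.0.1:8181 listen address from the gitlab.rb above might look roughly like this (a sketch only, not a verified fix for the missing slash; the directive values are illustrative):
<VirtualHost *:80>
  ServerName git.domain.tld
  ProxyPreserveHost On
  AllowEncodedSlashes NoDecode
  # Trailing slashes must match on both sides; a mismatch here can mangle rewritten URLs.
  ProxyPass / http://127.0.0.1:8181/ nocanon
  ProxyPassReverse / http://127.0.0.1:8181/
</VirtualHost>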
We are planning to rotate the logs generated by Tomcat using logrotate for volume maintenance. When I checked for the logs, I found two places in which they were being generated: "../apache-tomcat-7.0.57/logs" and the path specified in "logging.properties". I checked the Tomcat documentation, from which I understood that Tomcat uses the default path "/logs" if no path is specified in "logging.properties". I was not able to figure out whether I have missed some configuration.
logging.properties file:
handlers = 1catalina.org.apache.juli.FileHandler, 2localhost.org.apache.juli.FileHandler, 3manager.org.apache.juli.FileHandler, 4host-manager.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
1catalina.org.apache.juli.FileHandler.level = FINE
1catalina.org.apache.juli.FileHandler.directory = <custome path>
1catalina.org.apache.juli.FileHandler.prefix = catalina.
2localhost.org.apache.juli.FileHandler.level = FINE
2localhost.org.apache.juli.FileHandler.directory = <custome path>
2localhost.org.apache.juli.FileHandler.prefix = localhost.
3manager.org.apache.juli.FileHandler.level = FINE
3manager.org.apache.juli.FileHandler.directory = <custome path>
3manager.org.apache.juli.FileHandler.prefix = manager.
4host-manager.org.apache.juli.FileHandler.level = FINE
4host-manager.org.apache.juli.FileHandler.directory = <custome path>
4host-manager.org.apache.juli.FileHandler.prefix = host-manager.
java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.FileHandler
# For example, set the org.apache.catalina.util.LifecycleBase logger to log
# each component that extends LifecycleBase changing state:
#org.apache.catalina.util.LifecycleBase.level = FINE
# To see debug messages in TldLocationsCache, uncomment the following line:
#org.apache.jasper.compiler.TldLocationsCache.level = FINE
My question is: why are the logs generated in multiple places, and how can I make Tomcat log to just one directory so the rotation can be maintained there?
Reference link
https://tomcat.apache.org/tomcat-7.0-doc/logging.html
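For context, a typical logrotate policy for the Tomcat log directory might look like this (the apache-tomcat-7.0.57 directory name is from the question; the /opt prefix and the rotation options are illustrative assumptions):
/opt/apache-tomcat-7.0.57/logs/*.log /opt/apache-tomcat-7.0.57/logs/catalina.out {
    daily
    rotate 7
    compress
    missingok
    # copytruncate because Tomcat keeps catalina.out open while running
    copytruncate
}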
By default, it will log to ${catalina.base}/logs, which is what you should see in ${catalina.base}/conf/logging.properties.
Additionally, standard output (e.g. exception.printStackTrace()) goes, by default, into ${catalina.base}/logs/catalina.out.
${catalina.base}/logs/catalina.out can be redirected to a different file by setting the environment variable CATALINA_OUT or CATALINA_OUT_CMD. To see what CATALINA_OUT_CMD does, it is easiest to read the comments in ${catalina.home}/bin/catalina.sh.
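A sketch of overriding that location via bin/setenv.sh (which catalina.sh sources at startup if present; the target path is an assumption):
# ${catalina.base}/bin/setenv.sh
CATALINA_OUT=/var/log/tomcat/catalina.out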
I'm trying to set RabbitMQ to work over SSL.
I have changed the configuration file (/etc/rabbitmq/rabbitmq.config) as mentioned in the following link
https://www.rabbitmq.com/ssl.html to:
# Defaults to rabbit. This can be useful if you want to run more than one node
# per machine - RABBITMQ_NODENAME should be unique per erlang-node-and-machine
# combination. See the clustering on a single machine guide for details:
# http://www.rabbitmq.com/clustering.html#single-machine
#NODENAME=rabbit
# By default RabbitMQ will bind to all interfaces, on IPv4 and IPv6 if
# available. Set this if you only want to bind to one network interface or#
# address family.
#NODE_IP_ADDRESS=127.0.0.1
# Defaults to 5672.
#NODE_PORT=5672
listeners.ssl.default = 5671
ssl_options.cacertfile = /home/myuser/rootca.crt
ssl_options.certfile = /home/myuser/mydomain.com.crt
ssl_options.keyfile = /home/myuser/mydomain.com.key
ssl_options.verify = verify_peer
ssl_options.password = 1234
ssl_options.fail_if_no_peer_cert = false
I keep getting the following errors:
sudo rabbitmq-server
/usr/lib/rabbitmq/bin/rabbitmq-server: 15: /etc/rabbitmq/rabbitmq-env.conf: listeners.ssl.default: not found
If I remove the above line I get the following error:
sudo rabbitmq-server
/usr/lib/rabbitmq/bin/rabbitmq-server: 17: /etc/rabbitmq/rabbitmq-env.conf: ssl_options.cacertfile: not found
It is worth mentioning that without the above SSL configuration, everything works just fine.
Could you please assist?
Thanks :)
It's very important when you request assistance with software that you always state what version of the software you're using. In the case of RabbitMQ, providing the Erlang version and operating system used is also necessary.
In your case, you have (commented-out) environment configuration in /etc/rabbitmq/rabbitmq-env.conf, as well as RabbitMQ configuration, which is not correct. The following lines must be removed from rabbitmq-env.conf and put into the /etc/rabbitmq/rabbitmq.conf file:
listeners.ssl.default = 5671
ssl_options.cacertfile = /home/myuser/rootca.crt
ssl_options.certfile = /home/myuser/mydomain.com.crt
ssl_options.keyfile = /home/myuser/mydomain.com.key
ssl_options.verify = verify_peer
ssl_options.password = 1234
ssl_options.fail_if_no_peer_cert = false
Please also see the documentation
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
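After moving those lines and restarting the node, one way to confirm that the TLS listener is up is to check the listeners section of the status output (a sketch; the exact output format varies by RabbitMQ version):
sudo rabbitmqctl status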
In the RabbitMQ configuration, change the following to listen on 5673:
listeners.ssl.default = 5673
I am currently using the Chef Supermarket Jenkins cookbook to deploy an instance of Jenkins. I am attempting to enable security in my configuration of the Jenkins instance in the _master_war_.rb recipe file. The code to enable security and create a user with an RSA key is below:
require 'openssl'
require 'net/ssh'
unless node.run_state[:jenkins_private_key]
  # defaults to /etc/chef-jenkins-api.key
  key_path = node['jenkins_chefci']['jenkins_private_key_path']
  begin
    Chef::Log.info 'Trying to read private key from ' + key_path + ' for the chef user in jenkins'
    key = OpenSSL::PKey::RSA.new File.read(key_path)
    Chef::Log.info 'Successfully read existing private key'
  rescue
    key = OpenSSL::PKey::RSA.new 2048
    Chef::Log.info 'Generating new key pair for the chef user in jenkins in ' + key_path
    file key_path do
      content key.to_pem
      mode 0500
      sensitive true
    end
  end
  public_key = [key.ssh_type, [key.to_blob].pack('m0'), 'auto-generated key'].join(' ')

  # Create the Jenkins user with the public key
  jenkins_user 'chef' do
    id 'chef#' + Chef::Config[:node_name]
    full_name 'Chef Client'
    public_keys [public_key]
  end

  # Set the private key on the Jenkins executor
  node.run_state[:jenkins_private_key] = key.to_pem
end
When I attempt to apply this recipe in a run list on a managed node, I receive the following error:
NoMethodError
-------------
undefined method `[]' for nil:NilClass
The stacktrace indicates that the error is related to this particular line of code from my recipe file:
>> key_path = node['jenkins_chefci']['jenkins_private_key_path']
I have seen this error (undefined method `[]' for nil:NilClass) quite a bit online, but I have not been able to narrow down the root cause in my recipe. Could I be missing something in my recipe file? I'm wondering if the root cause could be related to these couple of lines:
require 'openssl'
require 'net/ssh'
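For reference, this NoMethodError typically arises when the parent attribute itself is nil, i.e. nothing (an attributes file, role, environment, or wrapper cookbook) defines node['jenkins_chefci'] at all. A minimal sketch, using the attribute names from the recipe above:
node['jenkins_chefci']                               # => nil when no attribute source sets it
node['jenkins_chefci']['jenkins_private_key_path']   # => NoMethodError: undefined method `[]' for nil:NilClass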
I am trying to enable the Felix framework security features on Apache Karaf (version 3.0+),
but I could not find any official (or even unofficial) instructions on doing this.
The system.properties file (in the Karaf etc folder) in fact contains the following:
#
# Security properties
#
# To enable OSGi security, uncomment the properties below,
# install the framework-security feature and restart.
#
java.security.policy=${karaf.etc}/all.policy
org.osgi.framework.security=osgi
org.osgi.framework.trust.repositories=${karaf.etc}/trustStore.ks
When I uncomment those properties and start Karaf,
it gives the following error message:
Exception in thread "CM Configuration Updater" java.security.AccessControlException: access denied ("org.osgi.framework.AdaptPermission" "org.osgi.framework.wiring.BundleRevision" "adapt")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:372)
at java.security.AccessController.checkPermission(AccessController.java:559)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at org.apache.felix.framework.BundleImpl.checkAdapt(BundleImpl.java:1058)
at org.apache.felix.framework.BundleImpl.adapt(BundleImpl.java:1066)
In the all.policy file, it looks like all required permissions are granted to all components:
grant {
    permission java.security.AllPermission;
};
I've done some googling to see whether anyone else has run into this issue,
and I found this: https://issues.apache.org/jira/browse/KARAF-3400
It says this issue actually arises due to a bug.
Is it really a bug, or some minor configuration error?
Has anyone succeeded in enabling Felix security on Karaf 3.0+?
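For completeness, the sequence implied by the system.properties header above would roughly be as follows (a sketch; the feature name is taken from that comment, and the console syntax assumes Karaf 3.x):
# 1. Uncomment the security properties in etc/system.properties.
# 2. From the Karaf console, install the security feature:
feature:install framework-security
# 3. Restart Karaf so the security manager takes effect.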