I am working through a Python Pyramid tutorial, and I have been making as many notes as I can inside the files I am writing.
Something strange happened, and I'd like to know why.
I wrote the development.ini file as it was done in the tutorial, then added notes:
# we are using this file for configuration in development
# configure our WSGI app
[app:main]
# which entry point to use as the app
use = egg:mysite
# reloads when templates are changed, not to be used in production
pyramid.reload_templates = true
# which server to use
[server:main]
# get the main entry point from the waitress package
use = egg:waitress#main
host = 0.0.0.0
port = 6534
# this is a great way to keep configuration out of the rest of our package
# more importantly, this file is easy to tweak for launching our package in a different manner
Running pserve development.ini and visiting the site in Chrome returns:
This site can’t be reached
0.0.0.0 refused to connect.
Search Google for 6543
ERR_CONNECTION_REFUSED
I removed the comments:
[app:main]
use = egg:mysite
pyramid.reload_templates = true
[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 6534
I received the same error as before:
This site can’t be reached
0.0.0.0 refused to connect.
Search Google for 6543
ERR_CONNECTION_REFUSED
Then I copied and pasted code from the tutorial's repo into development.ini:
[app:main]
use = egg:mysite
pyramid.reload_templates = true
[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 6543
I was then successfully able to reach localhost.
I am most interested to know why this happened, how to avoid this problem, and, if possible, how to comment a development.ini file.
Note:
I am using PyCharm as my IDE
I am running this code on my local computer
ERR_CONNECTION_REFUSED means that the port you entered in Chrome's address bar does not match the port configured in your .ini file. Look very carefully at your port numbers to make sure they align. You transposed the 4 and 3 in your original .ini (port = 6534), so I assume you tried to reach http://0.0.0.0:6543 in Chrome's address bar.
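As for the commenting part: full-line comments starting with # are valid in a PasteDeploy-style .ini file, so your notes were never the problem; only the port was. A corrected version of your file, comments included, would look like this (a sketch based on your own file, with only the port fixed):

# we are using this file for configuration in development
[app:main]
# which entry point to use as the app
use = egg:mysite
# reload when templates change; not to be used in production
pyramid.reload_templates = true

[server:main]
# get the main entry point from the waitress package
use = egg:waitress#main
host = 0.0.0.0
port = 6543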
Bonus tip: PyCharm lets you compare a single file against its local history on disk as well as against version control, which helps reveal typographical errors like this. Right-click (or Ctrl-click) on the file, then Local History > Show History.
I am looking for help configuring an Azure PostgreSQL DB for a Docker Swarm-based GitLab instance.
Initially, I followed the documentation at https://docs.gitlab.com/13.6/ee/administration/postgresql/external.html. Yet I came to find out that the default provided user is in the form username, whereas Azure requires it to be in the form username@hostname. I tried passing the username in the gitlab.rb file (gitlab_rails['db_username'] = 'username@hostname'), but it still failed, even after replacing the @ with its URI-encoded form, %40.
After some extensive searching, I found this documentation - https://docs.gitlab.com/13.6/ee/administration/environment_variables.html - which suggests using the DATABASE_URL environment variable to set the full connection string in the form postgresql://username:password@hostname:port/dbname. I did so, and it solved the issue of GitLab itself communicating with Azure PostgreSQL (in this case I replaced the username with username%40hostname, per Azure's requirements).
Alas, the success was short-lived, since I then found out that neither Puma nor Sidekiq could connect to the database, always throwing the following error:
==> /var/log/gitlab/sidekiq/current <==
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
After some searching, I found that gitlab-ctl generates the following file when starting the GitLab instance:
# This file is managed by gitlab-ctl. Manual changes will be
# erased! To change the contents below, edit /etc/gitlab/gitlab.rb
# and run `sudo gitlab-ctl reconfigure`.
production:
  adapter: postgresql
  encoding: unicode
  collation:
  database: <database>
  username: "<username>"
  password:
  host: "/var/opt/gitlab/postgresql"
  port: 5432
  socket:
  sslmode:
  sslcompression: 0
  sslrootcert:
  sslca:
  load_balancing: {"hosts":[]}
  prepared_statements: false
  statement_limit: 1000
  connect_timeout:
  variables:
    statement_timeout:
(database and username were redacted)
In short, it ignores the DATABASE_URL environment variable and assumes the now non-existent configuration parameters from gitlab.rb.
So, right now, I'm a bit out of options and was wondering if anyone has had a similar issue and, if so, how you were able to overcome it.
Any help is appreciated.
Thanks in advance.
TL;DR: Pass the username@hostname string directly into gitlab_rails['db_username'] in double quotes. The documentation for connecting to an Azure PostgreSQL database on the official GitLab page is not correct.
So, after some searching and digging deep into the GitLab configuration, I came to find out that the issue is very specific and related to the use of Docker secrets.
In my gitlab.rb configuration file, in the database configuration part, I'm using the following:
### GitLab database settings
###! Docs: https://docs.gitlab.com/omnibus/settings/database.html
###! **Only needed if you use an external database.**
gitlab_rails['db_adapter'] = "postgresql"
gitlab_rails['db_encoding'] = "unicode"
gitlab_rails['db_database'] = File.read('/run/secrets/postgresql_database')
gitlab_rails['db_username'] = File.read('/run/secrets/postgresql_user')
gitlab_rails['db_password'] = File.read('/run/secrets/postgresql_password')
gitlab_rails['db_host'] = File.read('/run/secrets/postgresql_host')
gitlab_rails['db_port'] = File.read('/run/secrets/postgresql_port')
gitlab_rails['db_sslmode'] = 'require'
Now, this exact configuration was used previously for testing purposes and worked (though without an Azure PostgreSQL database). I'm passing the correct secrets to Docker, and I've confirmed that the secrets do, in fact, exist.
(Side note: I've also established that GitLab uses the ActiveRecord::Base.establish_connection method from Ruby's ActiveRecord library to connect to the database.)
Yet, when using the username@hostname value for the user and passing it in through the postgresql_user secret, the ActiveRecord::Base.establish_connection method suddenly assumes that the @hostname part is the actual hostname I want to connect to. And I've confirmed that the secret is being generated correctly inside the Docker container.
Now, it gets even stranger, because if I pass the username@hostname string directly in the gitlab.rb file - in the gitlab_rails['db_username'] parameter, in double quotes - it suddenly starts connecting without complaining.
So, in short: if you are using an Azure PostgreSQL database for a dockerized GitLab instance and using secrets to pass the configuration to the gitlab.rb file, don't pass the username@hostname value through a secret; put it directly in the gitlab.rb file.
I don't know if this is a Ruby issue or a GitLab issue (I'm not a Ruby developer), but I did try converting the File.read output to a String and to a symbol, used File.open('filepath', &:readline), and other shenanigans, and nothing worked. So, if anyone out there cares to add their reason for this, please feel free to do so.
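One possible explanation (an assumption on my part, not something verified in this setup): File.read returns the secret file's contents verbatim, including the trailing newline that Docker secret files commonly end with, and a stray "\n" inside username@hostname could plausibly derail how the connection settings are parsed. A minimal sketch of how one might check for and strip it in gitlab.rb:

# Hypothetical diagnostic: inspect the raw secret value for a trailing newline.
raw = File.read('/run/secrets/postgresql_user')
# Output like "username@hostname\n" here would point to the culprit.
puts raw.inspect

# Possible workaround (untested): strip surrounding whitespace before use.
gitlab_rails['db_username'] = File.read('/run/secrets/postgresql_user').strip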
Also, the tutorial provided by Azure - https://learn.microsoft.com/pt-pt/azure/postgresql/connect-ruby - doesn't work with GitLab, since it complains about the %40.
Hope this can help anyone out there.
I have a basic ActiveStorage setup with one model that has_many_attached :file_attachments. In a service elsewhere, I'm trying to generate a link to be used outside the main app (email, a background job, etc.).
With S3 in production I can do:
item.file_attachments.first.service_url
and I get an appropriate link to the S3 bucket and object.
I cannot use the method prescribed in the Rails guides:
Rails.application.routes.url_helpers.rails_blob_path(item.file_attachments.first)
It errors with:
ArgumentError: Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true
I can pass it a host: 'http://....' argument and it's happy, although it still doesn't generate the full URL, just the path.
In development I'm using disk-backed file storage, and I can't use either method:
> Rails.application.routes.url_helpers.rails_blob_path(item.file_attachments.first)
ArgumentError: Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true
Setting the host here also doesn't generate a full URL.
In production service_url works; in development, however, I get this error:
> item.file_attachments.first.service_url
ArgumentError: Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true
and specifying a host doesn't help:
item.file_attachments.first.service_url(host:'http://localhost.com')
ArgumentError: unknown keyword: host
I've also tried adding
config.action_mailer.default_url_options = { :host => "localhost:3000" }
config.action_storage.default_url_options = { :host => "localhost:3000" }
Rails.application.routes.default_url_options[:host] = 'localhost:3000'
with no success.
My question is: how can I get the full URL in a manner that works in both development and production? Or where should I set the host?
Active Storage’s disk service expects to find a host for URL generation in ActiveStorage::Current.host.
When you call ActiveStorage::Blob#service_url manually, ensure ActiveStorage::Current.host is set. If you call it from a controller, you can subclass ActiveStorage::BaseController. If that’s not an option, set ActiveStorage::Current.host in a before_action hook:
class Items::FilesController < ApplicationController
  before_action do
    ActiveStorage::Current.host = request.base_url
  end
end
Outside of a controller, use ActiveStorage::Current.set to provide a host:
ActiveStorage::Current.set(host: "https://www.example.com") do
  item.file_attachments.first.service_url
end
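As for the routing-helper approach from the question: rails_blob_path returns only a path by design, so even with a host it won't give you a full URL. The _url variant with an explicit host should (a sketch; www.example.com is a placeholder):

Rails.application.routes.url_helpers.rails_blob_url(
  item.file_attachments.first,
  host: "https://www.example.com"
)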
I needed the URL for an image stored in ActiveStorage.
The image was Post.first.image.
Wrong:
Post.first.image.url
Post.first.image.service_url
Right:
url_for(Post.first.image)
I had a similar problem: I had to force the port to be different due to a container setup, and in my case the redirect done by ActiveStorage included the wrong port. For me, adjusting the default_url_options worked:
before_action :configure_active_storage_for_docker, if: -> { Rails.env.development? }

def configure_active_storage_for_docker
  Rails.application.routes.default_url_options[:port] = 4000
end
I'm behind a firewall, and Lazybones can't reach its repository without a proxy.
I've searched the source and can't seem to find any reference to a proxy that seems relevant.
Support was officially added in version 0.8.1 of Lazybones, albeit via a general mechanism to add arbitrary system properties to the application in its configuration file, ~/.lazybones/config.groovy.
You can read about the details in the project README, but in essence, simply add the following to your config.groovy file:
systemProp {
    http {
        proxyHost = "localhost"
        proxyPort = 8181
    }
    https {
        proxyHost = "localhost"
        proxyPort = 8181
    }
}
You can use the systemProp. prefix to add any system properties to Lazybones, similar to the way it works in Gradle.
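If I read the README correctly, the nested block is just Groovy ConfigSlurper sugar, so the same settings should also be expressible as flat dotted assignments (an untested sketch):

systemProp.http.proxyHost = "localhost"
systemProp.http.proxyPort = 8181
systemProp.https.proxyHost = "localhost"
systemProp.https.proxyPort = 8181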
Is that what you're looking for? Basically, you need to add some properties to the gradle.properties file.
I am using Cygwin on Windows, and I have modified the last line of
~/.gvm/lazybones/current/bin/lazybones
to say:
exec "$JAVACMD" "${JVM_OPTS[#]}" -classpath "$CLASSPATH" "-Dhttp.proxyHost=127.0.0.1" "-Dhttp.proxyPort=8888" "-Dhttp.nonProxyHosts=localhost|127.0.0.1" uk.co.cacoethes.lazybones.LazybonesMain "$#"
Please note the quotes around the options. It works very well with my local Fiddler installation.
I have found no better way to enable proxy support due to the way the script is using eval. Maybe a more experienced shell script programmer can come up with a more elegant solution.
I was able to get out through the proxy by setting the JAVA_TOOL_OPTIONS environment variable, which the JVM confirms at startup:
Picked up JAVA_TOOL_OPTIONS: -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8080
-Dhttp.nonProxyHosts="lmig.com" -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=8080
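For reference, the variable can be exported in the shell before running Lazybones; a sketch using the same example proxy address:

export JAVA_TOOL_OPTIONS="-Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8080 -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=8080"
lazybones list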
Unfortunately, my environment requires authentication, so I couldn't provide the complete proxy configuration this way. I first ran OWASP Zed Attack Proxy (ZAP), which let me run a proxy on my own machine (at port 8080) that provided the required authentication.
I was then able to run the complete lazybones list command, which retrieved the contents of the repositories.
Unfortunately, I was not able to create an application from those templates, because Bintray required a login (though an anonymous login would do) and I couldn't seem to get an additional level of authentication through (I received "Unauthorized" from Bintray).
I'm trying to set an image URL in Jade.
I have this: img(src='http://192.168.1.8:8081')
I need 192.168.1.8 to be replaced automatically with the server address.
For example, if I connect to my server from the office, my URL should become img(src='http://myPUBLICserveraddress:8081').
How can I do this?
Thanks
I do this with Dust.js, but the principle should be the same. I set hostname and port attributes on the app for both development and production (assigned in app.configure('development') and app.configure('production')), and then in the templates I just do the Dust.js equivalent of:
- if (port)
  img(src="http://#{host}:#{port}")
- else
  img(src="http://#{host}")
And I get what I'm looking for, which is the right link based on the environment (dev vs production).
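For completeness, the app-side half of that setup might look like this in an Express 3-era app (a sketch under that assumption; the host values are placeholders, and app.locals is what makes host and port visible to the templates):

// Express 3 style; note that app.configure was removed in Express 4.
app.configure('development', function () {
  app.locals.host = '192.168.1.8';
  app.locals.port = 8081;
});

app.configure('production', function () {
  app.locals.host = 'myPUBLICserveraddress';
  app.locals.port = 8081;
});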
I'm reading out the MIME types from IIS's MimeMap using this code:
_mimeTypes = new Dictionary<string, string>();
// load from the IIS store
DirectoryEntry Path = new DirectoryEntry("IIS://localhost/MimeMap");
PropertyValueCollection PropValues = Path.Properties["MimeMap"];
IISOle.MimeMap MimeTypeObj;
foreach (var item in PropValues)
{
    // IISOle -> Add reference to Active DS IIS Namespace provider
    MimeTypeObj = (IISOle.MimeMap)item;
    _mimeTypes.Add(MimeTypeObj.Extension, MimeTypeObj.MimeType);
}
Do I need to replace the localhost part when I deploy it to my live server? If not, why not, and what are the implications of not doing so?
Cheers
It should not be an issue to leave the host as 'localhost'.
After all, you want the MimeMap of the machine your app is running on, correct?
A possible complication I can foresee: if you are using a third party as a host, they can do anything they want with host headers, and it may be possible that localhost is not available for whatever reason.
But you should simply give it a shot and adjust if necessary.
If you leave it as 'localhost', you will have to run this code directly on the server.
If you change it to fetch the machine name dynamically, you can run it remotely as well.
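A sketch of that variant, reusing the setup from the question (Environment.MachineName is one way to resolve the name; a remote server's name could be passed in instead):

// Hypothetical variant: target a named machine instead of localhost.
string host = Environment.MachineName; // or a remote server's name
DirectoryEntry mimeMapEntry = new DirectoryEntry("IIS://" + host + "/MimeMap");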