ActiveStorage service_url and rails_blob_path cannot generate a full URL when not using S3

I have a basic ActiveStorage setup with one model that has_many_attached :file_attachments. In a service elsewhere I'm trying to generate a link to be used outside the main app (email, job, etc.).
With S3 in production I can do:
item.file_attachments.first.service_url and I get an appropriate link to the S3 bucket+object.
I cannot use the method prescribed in the rails guides: Rails.application.routes.url_helpers.rails_blob_path(item.file_attachments.first)
It errors with:
ArgumentError: Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true
I can pass it a host: 'http://....' argument and it's happy, although it still only generates the path, not the full URL.
In development I'm using disk-backed file storage, and I can't use either method:
> Rails.application.routes.url_helpers.rails_blob_path(item.file_attachments.first)
ArgumentError: Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true
Setting host here also doesn't generate a full URL.
In production service_url works; in development, however, I get the error:
> item.file_attachments.first.service_url
ArgumentError: Missing host to link to! Please provide the :host parameter, set default_url_options[:host], or set :only_path to true
and specifying a host doesn't help:
item.file_attachments.first.service_url(host:'http://localhost.com')
ArgumentError: unknown keyword: host
I've also tried adding
config.action_mailer.default_url_options = { :host => "localhost:3000" }
config.action_storage.default_url_options = { :host => "localhost:3000" }
Rails.application.routes.default_url_options[:host] = 'localhost:3000'
with no success.
My question is: how can I get the full URL in a manner that works in both development and production? Or where should I set the host?

Active Storage’s disk service expects to find a host for URL generation in ActiveStorage::Current.host.
When you call ActiveStorage::Blob#service_url manually, ensure ActiveStorage::Current.host is set. If you call it from a controller, you can subclass ActiveStorage::BaseController. If that’s not an option, set ActiveStorage::Current.host in a before_action hook:
class Items::FilesController < ApplicationController
  before_action do
    ActiveStorage::Current.host = request.base_url
  end
end
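On newer Rails versions (6.0+; this note is mine, not part of the original answer), the same behavior is packaged as a concern, so the hook above can be replaced with an include:
class Items::FilesController < ApplicationController
  # Ships with Rails 6.0+; sets ActiveStorage::Current from the request,
  # equivalent to the before_action shown above.
  include ActiveStorage::SetCurrent
end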
Outside of a controller, use ActiveStorage::Current.set to provide a host:
ActiveStorage::Current.set(host: "https://www.example.com") do
  item.file_attachments.first.service_url
end
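If you want a full URL from the routing layer instead - which works with any storage service, because the URL points at your app and the app then redirects to the blob - rails_blob_url accepts an explicit host. A minimal sketch (the host values here are assumptions, not from the answer):
# Full URL to the app's redirect endpoint; works with both the disk
# and S3 services, since the app itself issues the redirect.
host = Rails.env.production? ? "https://www.example.com" : "http://localhost:3000"
Rails.application.routes.url_helpers.rails_blob_url(item.file_attachments.first, host: host)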

I needed the URL for an image stored in ActiveStorage.
The image was:
Post.first.image
Wrong:
Post.first.image.url
Post.first.image.service_url
Right:
url_for(Post.first.image)
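url_for works out of the box in views and controllers. Outside of them, a sketch of one way to get the same result (the localhost value and initializer placement are assumptions on my part):
# Configure a default host once, e.g. in config/environments/development.rb:
Rails.application.routes.default_url_options[:host] = "localhost:3000"

# Then, from a service or job, the route helpers resolve the attachment:
Rails.application.routes.url_helpers.url_for(Post.first.image)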

I had a similar problem: I had to force the port to be different due to a container setup, and the redirect done by ActiveStorage included the wrong port. For me, adjusting the default_url_options worked:
before_action :configure_active_storage_for_docker, if: -> { Rails.env.development? }

def configure_active_storage_for_docker
  Rails.application.routes.default_url_options[:port] = 4000
end
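If the port is fixed, the override can also live in the environment configuration instead of a per-request hook. A sketch under the same assumption (port 4000; the file placement is my choice, not the answer's):
# config/environments/development.rb
Rails.application.configure do
  config.after_initialize do
    # Set once at boot instead of on every request.
    Rails.application.routes.default_url_options[:port] = 4000
  end
end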

Related

Configuring Azure PostgreSQL in Gitlab EE

I am searching for some help in how to configure an Azure PostgreSQL DB in a Docker Swarm based Gitlab instance.
Initially, I followed the documentation in https://docs.gitlab.com/13.6/ee/administration/postgresql/external.html. Yet I came to find out that the default provided user is in the form of username, whereas Azure requires it to be in the form of username@hostname. I tried passing the username in the gitlab.rb file (gitlab_rails['db_username'] = 'username@hostname') but it still failed, even after replacing the @ with %40 as URI-encoded.
After some extensive searching, I found this documentation - https://docs.gitlab.com/13.6/ee/administration/environment_variables.html - which suggests using the DATABASE_URL environment variable to set the full connection string in the form postgresql://username:password@hostname:port/dbname, which I did, and it did solve the issue for Gitlab itself communicating with Azure PostgreSQL (in this case I replaced the username with username%40hostname, according to Azure requirements).
Alas, the success was short-lived, since I then came to find out that neither Puma nor Sidekiq could connect to the database, always throwing the following error:
==> /var/log/gitlab/sidekiq/current <==
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
After some searching, I found that gitlab-ctl is generating the following file when starting the Gitlab instance:
# This file is managed by gitlab-ctl. Manual changes will be
# erased! To change the contents below, edit /etc/gitlab/gitlab.rb
# and run `sudo gitlab-ctl reconfigure`.
production:
  adapter: postgresql
  encoding: unicode
  collation:
  database: <database>
  username: "<username>"
  password:
  host: "/var/opt/gitlab/postgresql"
  port: 5432
  socket:
  sslmode:
  sslcompression: 0
  sslrootcert:
  sslca:
  load_balancing: {"hosts":[]}
  prepared_statements: false
  statement_limit: 1000
  connect_timeout:
  variables:
    statement_timeout:
(database and username were removed)
It pretty much ignores the DATABASE_URL env variable and assumes the now non-existent configuration parameters in gitlab.rb.
So, right now, I'm a bit out of options and was wondering if anyone has had a similar issue and, if so, how you were able to overcome it.
Any help is appreciated.
Thanks in advance.
TL;DR: Pass the username@hostname string directly into gitlab_rails['db_username'] in double quotes. The documentation for connecting to Azure PostgreSQL on the official Gitlab page is not correct.
So, after some searching and going deep into the Gitlab configuration, I came to find out that the issue is very specific and related to the usage of Docker secrets.
In my gitlab.rb configuration file, in the database configuration part, I'm using the following:
### GitLab database settings
###! Docs: https://docs.gitlab.com/omnibus/settings/database.html
###! **Only needed if you use an external database.**
gitlab_rails['db_adapter'] = "postgresql"
gitlab_rails['db_encoding'] = "unicode"
gitlab_rails['db_database'] = File.read('/run/secrets/postgresql_database')
gitlab_rails['db_username'] = File.read('/run/secrets/postgresql_user')
gitlab_rails['db_password'] = File.read('/run/secrets/postgresql_password')
gitlab_rails['db_host'] = File.read('/run/secrets/postgresql_host')
gitlab_rails['db_port'] = File.read('/run/secrets/postgresql_port')
gitlab_rails['db_sslmode'] = 'require'
Now, this exact configuration was used previously for testing purposes and worked (but without an Azure PostgreSQL database). I'm passing the correct secrets to Docker, and I've confirmed that the secrets do, in fact, exist.
(Sidenote: I've also established that Gitlab uses the ActiveRecord::Base.establish_connection method from the Ruby ActiveRecord library in order to connect to the database.)
Yet, when using the username@hostname configuration for the user and passing that into the postgresql_user secret, suddenly the ActiveRecord::Base.establish_connection method assumes that the @hostname part is the actual hostname to connect to. And I've confirmed that the secret is being generated correctly inside the Docker container.
Now, it gets even stranger, because if I pass the username@hostname string directly into the gitlab.rb file - the gitlab_rails['db_username'] parameter - in double quotes, it suddenly starts connecting without complaining.
So, in short, if you're using an Azure PostgreSQL database for a dockerized Gitlab instance and using secrets to pass the configuration to the gitlab.rb file, don't pass the username@hostname through a secret; put it directly in the gitlab.rb file.
I don't know if this is a specific issue of Ruby or of Gitlab (I'm not a Ruby developer), but I did try converting the File.read output to a String, to a symbol, used File.open('filepath', &:readline), and other shenanigans, but nothing worked. So, if anyone out there would care to add their reason for this, please feel free to do so.
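One thing worth ruling out (this is my hypothesis, not something the thread confirms): Docker secrets files typically end with a trailing newline, and File.read preserves it, which could corrupt a username@hostname value in a way that a literal string in gitlab.rb avoids. A minimal sketch:
# gitlab.rb - hypothetical variant: strip surrounding whitespace (including
# a trailing "\n") from the secret file contents before using it.
gitlab_rails['db_username'] = File.read('/run/secrets/postgresql_user').strip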
Also, the tutorial provided by Azure - https://learn.microsoft.com/pt-pt/azure/postgresql/connect-ruby - doesn't work with Gitlab, since it complains about the %40.
Hope this can help anyone out there.

How to use the ChromeApp to connect to a node.js server?

I have a Node.js server and I'd like to know what I could do to make a Chrome App work with it. I tried putting "http://localhost:3000" (the server address) in the runtime handler:
chrome.app.runtime.onLaunched.addListener(function () {
  chrome.app.window.create('http://localhost:3000');
});
But it doesn't even launch. Does someone have an idea on what I could do?
Thanks.
You cannot launch external URLs with chrome.app.window.create. In fact, if you check the chrome.runtime.lastError property, you will see the following error:
The URL used for window creation must be local for security reasons.
I suggest you look into using the <webview> tag as it is much more appropriate for your use-case.
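A minimal sketch of that approach (the file names and window size are my assumptions): open a local page, and let a <webview> inside it load the server.
// background.js - open a local window; external URLs are not allowed here.
chrome.app.runtime.onLaunched.addListener(function () {
  chrome.app.window.create('window.html', {
    innerBounds: { width: 1024, height: 768 }
  });
});
window.html then embeds the server, and the manifest needs the "webview" permission:
<!-- window.html -->
<webview src="http://localhost:3000" style="width: 100%; height: 100%;"></webview>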

Localization file path error when using tap-i18n with meteor

I am running meteor inside a folder, like this:
ROOT_URL="http://localhost:3000/registration" meteor
Also, I am using the tap:i18n package for internationalisation support. The problem is that tap_i18n doesn't update the URL for the localisation files and still makes requests to http://localhost:3000/tap-i18n/en-US.json, which is not a valid address and hence throws a 404 error. It should make requests to http://localhost:3000/registration/tap-i18n/en-US.json. Notice the registration folder that was passed via ROOT_URL when starting meteor.
How can I tell the tap_i18n package to honor ROOT_URL?
Ranjan,
I've set up a small demo project with some explanations on how to achieve your configuration. Please let me know if it solves your issue.
How to configure tap:i18n with custom ROOT_URL
Check the configuration; you can set the i18n_files_route parameter.
Configuring tap-i18n
To configure tap-i18n, add a file named project-tap.i18n to your project.
This JSON can have the following properties. All of them are optional.
The values below are the defaults.
project-root/project-tap.i18n
-----------------------------
{
  "helper_name": "_",
  "supported_languages": null,
  "i18n_files_route": "/tap-i18n",
  "cdn_path": null
}
Source link for configuration
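Applied to the question above, the fix would presumably be to prefix that route with the folder from ROOT_URL (my reading of the configuration option, not confirmed in the thread):
project-root/project-tap.i18n
-----------------------------
{
  "i18n_files_route": "/registration/tap-i18n"
}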

Serve out swagger-ui from nodejs/express project

I would like to use the swagger-ui dist 'as-is'... well, almost as-is.
I pulled down the latest release from GitHub (2.0.24) and stuck it in a folder in my app. I then serve it out statically with express:
app.use('/swagger', express.static('./node_modules/swagger-ui/dist'));
That works as expected when I go to:
https://mydomain.com/swagger
However, I want to populate the url field of my swagger JSON dynamically, i.e. I may deploy to different domains:
https://mydomain.com/api-docs
https://otherdomain.com/api-docs
And when I visit:
https://mydomain.com/swagger
https://otherdomain.com/swagger
I would like to dynamically set the url.
Is that possible?
Assuming the /api-docs (or swagger.json) is always at the same path, and only the domain changes, you can set the url parameter of the SwaggerUi object to "/path/to/api-docs" or "/path/to/swagger.json" instead of a full URL. That would make the UI load that path relative to the domain the UI is hosted on.
For reference, I'm leaving the original answer as well, as it may prove useful in some cases.
You can use the url parameter to set the URL the UI should load.
That is, if you're hosting it under https://mydomain.com/swagger you can use https://mydomain.com/swagger?url=https://mydomain.com/api-docs and https://mydomain.com/swagger?url=https://otherdomain.com/api-docs to point at the different locations.
However, as far as I know, this functionality is only available at the current alpha version (which should be stable enough) and not with 2.0.24 that you use (though it's worth testing).
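For the relative-path approach, the url parameter is set where index.html initializes the UI. A sketch, assuming the window.swaggerUi bootstrap that the 2.x dist ships with:
// index.html (swagger-ui 2.x) - load the spec relative to the current
// domain, so the same dist works on mydomain.com and otherdomain.com.
window.swaggerUi = new SwaggerUi({
  url: "/api-docs", // relative path, resolved against the hosting domain
  dom_id: "swagger-ui-container"
});
window.swaggerUi.load();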
Another method would be to use the swagger-ui middleware located in swagger-tools.
let swaggerUi = require('../node_modules/swagger-tools/middleware/swagger-ui');
app.use(swaggerUi(config.swagger));
The variable config.swagger contains the swagger.yaml or swagger.json. I have in my settings:
let config = {
  appRoot: __dirname,
  swagger: require('./api/swagger/swagger.js')
};
Note: I am using the require('swagger-express-mw') module
You could try this in the index.html file of the swagger-ui... It works for me.
// `url` is assumed to hold the result of matching the ?url= query parameter.
if (url && url.length > 1) {
  url = decodeURIComponent(url[1]);
} else {
  url = window.location.origin + "/path/to/swagger.json";
}

How does one set a proxy in lazybones?

I'm behind a firewall and lazybones can't reach its repository without a proxy.
I've searched the source and can't seem to find any reference to a proxy that seems to be relevant.
Support was officially added in version 0.8.1 of Lazybones, albeit via a general mechanism to add arbitrary system properties to the application in its configuration file, ~/.lazybones/config.groovy.
You can read about the details in the project README, but in essence, simply add the following to your config.groovy file:
systemProp {
    http {
        proxyHost = "localhost"
        proxyPort = 8181
    }
    https {
        proxyHost = "localhost"
        proxyPort = 8181
    }
}
You can use the systemProp. prefix to add any system properties to Lazybones, similar to the way it works in Gradle.
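Since config.groovy is parsed as Groovy configuration, the flat dotted form should work as well (my assumption; the README documents the nested block above):
// ~/.lazybones/config.groovy - flat equivalent of the nested block
systemProp.http.proxyHost = "localhost"
systemProp.http.proxyPort = 8181
systemProp.https.proxyHost = "localhost"
systemProp.https.proxyPort = 8181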
Is that what you're looking for? Basically, you need to add some properties to the gradle.properties file.
I am using Cygwin on Windows and I have modified the last line of
~/.gvm/lazybones/current/bin/lazybones
to say
exec "$JAVACMD" "${JVM_OPTS[#]}" -classpath "$CLASSPATH" "-Dhttp.proxyHost=127.0.0.1" "-Dhttp.proxyPort=8888" "-Dhttp.nonProxyHosts=localhost|127.0.0.1" uk.co.cacoethes.lazybones.LazybonesMain "$#"
Please note the quotes around the options. It works very well with my local Fiddler installation.
I have found no better way to enable proxy support due to the way the script is using eval. Maybe a more experienced shell script programmer can come up with a more elegant solution.
I was able to get out through the proxy by setting the JAVA_TOOL_OPTIONS environment variable, which the JVM confirms on startup:
Picked up JAVA_TOOL_OPTIONS: -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8080 -Dhttp.nonProxyHosts="lmig.com" -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=8080
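For reference, a sketch of how that variable might be set in a shell before running lazybones (the export line is mine, not from the answer):
export JAVA_TOOL_OPTIONS="-Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8080 -Dhttps.proxyHost=127.0.0.1 -Dhttps.proxyPort=8080"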
Unfortunately, my environment requires authentication, so I couldn't provide the complete proxy this way. I first ran "OWASP Zed Attack Proxy (ZAP)", which allowed me to run a proxy on my own machine (at port 8080) that then provided the complete authentication required.
I was then able to run the complete "lazybones list" command, which retrieved the contents of the repositories.
Unfortunately, I was not able to create an application from those templates because bintray required a login (though an anonymous login would do) and I couldn't seem to get an additional level of authentication (I received "Unauthorized" from bintray).
