I'm using azurerm_postgresql_flexible_server and creating a database. I don't see an exported attribute, though, that gives me the URL of the database so that I can pass it into a created server as an environment variable and access the database.
How would I get the Database URL after creation?
I just want to add to @ceejayoz's answer what is happening under the hood, to complete this question with an answer.
To get the database URL from an azurerm_postgresql_flexible_server resource in Terraform, you can use the exported fqdn attribute and interpolate it into the respective database URL:
fqdn = azurerm_postgresql_flexible_server.example.fqdn
and pass it to the db_url:
db_url = "postgres://admin:password@${fqdn}/db"
May it help others too!
I'm working on automating the rotation of my Azure Function App's host key, which is used to maintain a more secure connection between my API Management instance and my function apps. The issue is that I can't figure out how to accomplish this, given the lack of clear documentation. I found a document on how to create a key for a specific function within the function app, but not for the host level. I've tried using the web UI resource manager to figure out what the proper values are, but host seems to have no values available via GET request to help me see what the formatting needs to be. In fact, I can't find any reference to my function app's host keys anywhere in the resource manager UI (of course I can in the portal).
I don't care if it's PowerShell, Bicep, ARM, Terraform azapi, whatever; I'd just like to find a way to accomplish the creation of a new host key so that I can control its rotation with Terraform. Does anyone know how to accomplish this?
Right now my attempt looks like
resource "azapi_resource" "function_host_key" {
type = "Microsoft.Web/sites/host/functionkeys#2018-11-01"
name = "${azurerm_windows_function_app.api_function.name}-host-key"
parent_id = "${azurerm_windows_function_app.api_function.id}/host"
body = jsonencode({
properties = {
name = "test-key-terraform"
value = "asdfasdfasdfasdfasdfasdfasdf"
}
})
}
I also tried
resource "azapi_resource" "function_host_key" {
type = "Microsoft.Web/sites#2018-11-01"
name = "${azurerm_windows_function_app.api_function.name}-host-key"
parent_id = "${azurerm_windows_function_app.api_function.id}/functionsAppKeys"
location = var.region
}
since it said the body was invalid, but this also throws an error due to there being no body. I'm wondering if this just isn't possible.
I also just tried
resource "azapi_resource" "function_host_key" {
type = "Microsoft.Web/host/functionkeys#2018-11-01"
name = "${azurerm_windows_function_app.api_function.name}-host-key"
parent_id = "${azurerm_windows_function_app.api_function.id}/host"
location = var.region
}
and the result said that the parent_id was invalid, expecting an ID of `Microsoft.Web/host`,
so I'm not sure what that parent_id should be.
I found an example bash/PowerShell script that uses the Azure REST API, but I get a 403 error when I attempt it. I can only assume that's because my function app is secured, but I'm not sure of a good way to determine that.
There must be a way to create a key programmatically...
UPDATE
I believe that this has been purposely made impossible to do with Terraform now, and I need to, as gross and backwards as it may be, use a CLI command in my pipeline. I understand you can do this, but it is my opinion that if I am using Terraform, I should have Terraform manage something, not have random CLI commands outside of Terraform doing things that TF should be able to manage.
I created a key using az functionapp keys set and that worked, and the output explicitly stated that the type of the created resource was Microsoft.Web/sites/host/functionKeys, so I went to the Azure Resource Explorer to see what API versions were available for this type, since it clearly exists... and found that, nope, Azure does not have it listed.
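If it has to be the CLI, one way to keep the call at least under Terraform's control is a local-exec provisioner. A rough sketch, assuming the az CLI is logged in wherever Terraform runs, and with the resource group and key value coming from placeholder variables:
resource "null_resource" "function_host_key" {
  # Re-run the command whenever the key name or value changes.
  triggers = {
    key_name  = "test-key-terraform"
    key_value = var.host_key_value
  }

  provisioner "local-exec" {
    command = <<EOT
az functionapp keys set \
  --resource-group ${var.resource_group_name} \
  --name ${azurerm_windows_function_app.api_function.name} \
  --key-type functionKeys \
  --key-name test-key-terraform \
  --key-value ${var.host_key_value}
EOT
  }
}
It's not the same as a first-class azapi resource, but at least the rotation lives in the Terraform code.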
What confuses me is that I see this being done with ARM templates, and I believe my code matches theirs, except I'm using azapi... and I get a not-found error. Giving up for now.
My Realtime Database URL is like https://myprojectid.firebaseio.com
When I start my Node.js server I get this error message:
@firebase/database: FIREBASE WARNING: Firebase error. Please ensure
that you have the URL of your Firebase Realtime Database instance
configured correctly.
(https://myprojectid-default-rtdb.firebaseio.com/)
My Firebase config is:
# Firebase
apiKey=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx
authDomain=myprojectid.firebaseapp.com
databaseURL=https://myprojectid.firebaseio.com
projectId=myprojectid
storageBucket=myprojectid.appspot.com
messagingSenderId=00000000000000
appId=1:0000000000000:web:000000000000000
When I try to debug the npm library, I can see that the namespace is indeed "myprojectid-default-rtdb", but I can't find where to change it.
I tried to set databaseURL with the query string ?ns=myprojectid, but that was no better.
By searching on Google I found this:
For recently created Firebase projects the default database URI
usually has the format https://<project-id>-default-rtdb.firebaseio.com.
Databases in projects created before September 2020 had the default
database URI https://<project-id>.firebaseio.com. For backward
compatibility reasons, if you don't specify a database URI, the SDK
will use the project ID defined in the Service Account JSON file to
automatically generate it.
Could you help me to find a solution please?
There's no way to change the name of the default database instance that is created in your Firebase project. If your project is on the paid plan, you can add additional database instances where you fully control the name, but not for the default one.
You will have to update your project configuration to use the correct database URL. If the database is in the US data center, that'd be:
databaseURL=https://myprojectid-default-rtdb.firebaseio.com
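Whichever SDK you use, the fix is the databaseURL value. A minimal sketch with firebase-admin on the Node.js side, assuming a service account JSON file (the path is a placeholder):
// Minimal sketch using firebase-admin; adjust to your own setup.
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.cert(require('./serviceAccountKey.json')),
  // Note the -default-rtdb suffix for projects created after September 2020.
  databaseURL: 'https://myprojectid-default-rtdb.firebaseio.com',
});

const db = admin.database();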
I am looking for some help on how to configure an Azure PostgreSQL DB in a Docker Swarm-based GitLab instance.
Initially, I followed the documentation in https://docs.gitlab.com/13.6/ee/administration/postgresql/external.html. Yet I came to find out that the provided default user is in the form username, whereas Azure requires it to be in the form username@hostname. I tried passing that username in the gitlab.rb file (gitlab_rails['db_username'] = 'username@hostname'), but it still failed, even after replacing the @ with the URI-encoded %40.
After some extensive searching, I found this documentation - https://docs.gitlab.com/13.6/ee/administration/environment_variables.html - which suggests using the DATABASE_URL environment variable to set the full connection string in the form postgresql://username:password@hostname:port/dbname, which I did, and it did solve the issue for GitLab itself communicating with Azure PostgreSQL (in this case I replaced the username with username%40hostname, according to Azure requirements).
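Concretely, the variable I set looked roughly like this (the server name, password, and database name here are placeholders; note the %40 that encodes the @ in the username):
DATABASE_URL=postgresql://gitlab%40my-azure-pg-server:mypassword@my-azure-pg-server.postgres.database.azure.com:5432/gitlabhq_production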
Alas, the success was short-lived, since I then came to find out that neither Puma nor Sidekiq can connect to the database, always throwing the following error:
==> /var/log/gitlab/sidekiq/current <==
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/opt/gitlab/postgresql/.s.PGSQL.5432"?
After some searching, I found that gitlab-ctl generates the following file when starting the GitLab instance:
# This file is managed by gitlab-ctl. Manual changes will be
# erased! To change the contents below, edit /etc/gitlab/gitlab.rb
# and run `sudo gitlab-ctl reconfigure`.
production:
  adapter: postgresql
  encoding: unicode
  collation:
  database: <database>
  username: "<username>"
  password:
  host: "/var/opt/gitlab/postgresql"
  port: 5432
  socket:
  sslmode:
  sslcompression: 0
  sslrootcert:
  sslca:
  load_balancing: {"hosts":[]}
  prepared_statements: false
  statement_limit: 1000
  connect_timeout:
  variables:
    statement_timeout:
(database and username were removed)
Pretty much, it ignores the DATABASE_URL env variable and falls back to the now non-existent configuration parameters in gitlab.rb.
So, right now, I'm a bit out of options and was wondering if anyone has had a similar issue and, if so, how you were able to overcome it.
Any help is appreciated.
Thanks in advance.
TL/DR: Pass the username@hostname string directly into gitlab_rails['db_username'] in double quotes. The documentation for connecting to an Azure PostgreSQL database on the official GitLab page is not correct.
So, after some searching and going deep into the GitLab configuration, I came to find out that the issue is very specific and related to the usage of Docker secrets.
In my gitlab.rb configuration file, in the database configuration part, I'm using the following:
### GitLab database settings
###! Docs: https://docs.gitlab.com/omnibus/settings/database.html
###! **Only needed if you use an external database.**
gitlab_rails['db_adapter'] = "postgresql"
gitlab_rails['db_encoding'] = "unicode"
gitlab_rails['db_database'] = File.read('/run/secrets/postgresql_database')
gitlab_rails['db_username'] = File.read('/run/secrets/postgresql_user')
gitlab_rails['db_password'] = File.read('/run/secrets/postgresql_password')
gitlab_rails['db_host'] = File.read('/run/secrets/postgresql_host')
gitlab_rails['db_port'] = File.read('/run/secrets/postgresql_port')
gitlab_rails['db_sslmode'] = 'require'
Now, this exact configuration was used previously for testing purposes and worked (but without an Azure PostgreSQL database). I'm passing the correct secrets to Docker and have confirmed that the secrets do, in fact, exist.
(Side note: I've also established that GitLab uses the ActiveRecord::Base.establish_connection method from Ruby's ActiveRecord library to connect to the database.)
Yet, when using the username@hostname configuration for the user and passing that into the postgresql_user secret, suddenly the ActiveRecord::Base.establish_connection method assumes that the @hostname part is the actual hostname I want to connect to. And I've confirmed that the secret is being generated correctly inside the Docker container.
Now, it gets even stranger, because if I pass the username@hostname string directly into the gitlab.rb file - the gitlab_rails['db_username'] parameter - in double quotes, it suddenly starts connecting without complaining.
So, in short, if you are using an Azure PostgreSQL database for a Dockerized GitLab instance and using secrets to pass the configuration to the gitlab.rb file, don't pass the username@hostname through a secret; put it directly in the gitlab.rb file.
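In other words, in gitlab.rb (the username and server name below are just placeholders):
gitlab_rails['db_username'] = "gitlab@my-azure-pg-server"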
I don't know if this is an issue specific to Ruby or to GitLab (I'm not a Ruby developer), but I did try converting the File.read output to a String and to a symbol, used File.open('filepath', &:readline), and other shenanigans, but nothing worked. So, if anyone out there would care to add the reason for this, please feel free to do so.
Also, the tutorial provided by Azure - https://learn.microsoft.com/pt-pt/azure/postgresql/connect-ruby - doesn't work with GitLab, since it complains about the %40.
Hope this can help anyone out there.
I have cloned the Concept Insights demo from Bluemix and made some minor changes to use my own corpus. It runs OK locally, but when I deploy it to Bluemix I get an authorization error when it tries to access my corpus. I'm certain that the error is a result of the early call in app.js to bluemix.getServiceCreds('concept_insights'), which apparently replaces my service credentials with some that must be stored in the environment on Bluemix.
Can someone explain the purpose of this function, and the proper approach to what I am trying to do? I could probably just delete the call to that function, but I'm afraid that I may be missing part of the larger picture if I do. Is this a way to keep my credentials out of the code base? If so, how do I make it work?
bluemix.getServiceCreds('concept_insights') gets the concept_insights service credentials from the VCAP_SERVICES environment variable that is created by Bluemix (see VCAP_SERVICES).
You probably want to use the credentials from the environment instead of hardcoding them in your app.js file.
When your app runs locally you hardcode the credentials in app.js, but when it runs in Bluemix those credentials are overwritten. If you don't want this to happen, remove the bluemix.getServiceCreds('concept_insights') call and keep the hardcoded credentials object:
var credentials = {
  url: 'https://gateway.watsonplatform.net/concept-insights/api',
  username: '<username>',
  password: '<password>',
  version: 'v2'
};
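If you'd rather read the credentials from the environment yourself instead of going through the helper, here is a rough sketch (the fallback values are the same placeholders as above):
// Parse VCAP_SERVICES when running on Bluemix; fall back to local values otherwise.
var vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');
var service = (vcap['concept_insights'] || [])[0] || {};

var credentials = Object.assign({
  url: 'https://gateway.watsonplatform.net/concept-insights/api',
  username: '<username>',
  password: '<password>',
  version: 'v2'
}, service.credentials);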
When creating the service, make sure you use the Standard plan.
If you use the Beta plan, you will have to use https://gateway.watsonplatform.net/concept-insights/api as the url.
In particular, I'm trying to access the doc_count of the requested database, but I'd also like to know how to access as much of the rest of the server data as possible.
You can get the database's document count from req.info.doc_count. See the request object structure for more info. To easily inspect your own requests you can use a dummy update function:
function (doc, req) {
  return [null, {"json": req}];
}
However, this is the only server data that you can access from the update functions.
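For example, if the dummy function is stored in a design document like this (the document and function names are made up):
{
  "_id": "_design/debug",
  "updates": {
    "dump": "function (doc, req) { return [null, {\"json\": req}]; }"
  }
}
then a POST to /mydb/_design/debug/_update/dump returns the whole request object as JSON, and req.info.doc_count in that response is the document count of the requested database.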