My Bixby capsule.properties is currently set up like so:
capsule.config.mode=default
And I have distinct dev/prod config/secrets set up, and the values look correct to me. But despite double-checking the values on submission to the marketplace, the credentials are failing in prod. Is there an example of how capsule.properties is supposed to be set up to account for different dbs in dev & prod?
Among other things, I am confused by the properties precedence explanation in https://bixbydevelopers.com/dev/docs/reference/ref-topics/capsule-config#property-precedence, which seems to say that it will grab dev first regardless of whether it's in prod or dev.
Regarding capsule.properties:
It is not used to select DEV or PROD. Think of it more as a final fallback place the capsule looks for a property when config.get() cannot fetch it from DEV or PROD.
Regarding Property Precedence:
In case of IDE sync or revision override of private submission, or on-device testing with private submission:
DEV > PROD > capsule.properties
In case of marketplace capsule, revision override of public submission, or on-device testing with public submission:
PROD > capsule.properties
Basically, DEV is not visible when dealing with public submission.
Related
For context, I'm in the process of updating a Rails app to 5.2 and then to 6.0.
I'm updating my credentials to use the config/credentials.yml.enc and config/master.key defaults with Rails 5.2+ apps.
The Rails docs state:
In test and development applications get a secret_key_base derived from the app name. Other environments must use a random key present in config/credentials.yml.enc
(emphasis added)
This leads me to think that in production the SECRET_KEY_BASE value is required to be read from Rails.application.credentials.secret_key_base via config/credentials.yml.enc. In test and development environments, the secret_key_base is essentially "irrelevant", since it's derived from the app name.
However, when I was looking at the Rails source code, it reads:
def key
  read_env_key || read_key_file || handle_missing_key
end
That seems to say the order of reading values is:
ENV["SECRET_BASE_KEY"]
Rails.application.credentials.secret_base_key
Raise error
I use Heroku for my hosting, and have an ENV["SECRET_KEY_BASE"] environment variable that stores this secret value.
Questions
If I have both ENV["SECRET_KEY_BASE"] and Rails.application.credentials.secret_key_base set, which one takes priority?
Is using the ENV var going to be deprecated at some point?
I have lots of environment-specific ENV variables because I don't want to use my production accounts in development for AWS S3 buckets, Stripe accounts, etc. The flat-file format of credentials.yml.enc seems to assume developers only need to access these 3rd-party APIs in production. Is there an accepted format for handling environment-specific credentials in Rails yet?
I read through the comment threads on DHH's original PR as well as a linked PR that says it implements environment-specific credentials, but the docs don't mention this implementation so I'm not certain if it's the standard or if it's going to go away sometime soon.
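For what it's worth, here is a hedged sketch of what environment-specific credentials look like in Rails 6, assuming the feature from that PR shipped as described (the aws key below is just a made-up example):

# Creates config/credentials/production.yml.enc and config/credentials/production.key
bin/rails credentials:edit --environment production

# Example decrypted contents of config/credentials/production.yml.enc:
# aws:
#   access_key_id: PROD_ACCESS_KEY_ID
#   secret_access_key: PROD_SECRET

# Read the same way in every environment; in production the file above is used:
Rails.application.credentials.dig(:aws, :access_key_id)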
I have added a Slack service template to my GitLab CE instance via the admin interface. Everything worked. I activated "active by default".
As a result, all projects now push notifications into the main channel, and there are a lot of them.
Changes to the service template configuration are not inherited by the projects, which effectively renders me unable to revert the setting via the admin UI.
So, how can I disable the Slack service integration for all projects before it drives all of us crazy, because the general channel is just being flooded by GitLab?
That is followed by issue 40921:
Allow to apply service template to all projects
Sometimes users want to apply the same integration, like JIRA, across all GitLab projects. Currently, templates are the only way to do that through the UI, but project integration templates only apply to projects created after the template was set.
Only workaround:
I had this issue too. One workaround is to patch the database like this:
sudo gitlab-rails dbconsole
UPDATE services SET properties = replace(properties, 'http://someoldurl.com', 'https://somenewurl.com');
(to be adapted to your Slack settings: this is just an example)
Following #VonC's advice to dive into the depths of psql and hack my way through, I finally ran the following commands to disable the active flag for the relevant services (Slack and Mattermost in our case):
sudo gitlab-rails dbconsole
UPDATE "services" SET active = FALSE WHERE type LIKE 'SlackService' AND active = TRUE;
UPDATE "services" SET active = FALSE WHERE type LIKE 'SlackSlashCommandsService' AND active = TRUE;
UPDATE "services" SET active = FALSE WHERE type LIKE 'MattermostService' AND active = TRUE;
UPDATE "services" SET active = FALSE WHERE type LIKE 'MattermostSlashCommandsService' AND active = TRUE;
We have a client with a daily import of web content into their site, and every day, after this import, they must run a staging publication to transfer the content to the production site.
Is there a way to trigger the staging functionality programmatically?
Thank you in advance,
Harry
I think here is an answer to the question:
We scheduled a staging publication and looked at the job entry in the Quartz tables. It seems that the class that handles the job is PersistedQuartzSchedulerEngineInstance, and inside it there is a call to the method StagingUtil.copyRemoteLayouts that doesn't take any portlet requests in its parameters.
That is exactly what I have been searching for. The only problem is defining the parameter map, which contains all the selections from the UI, when setting up a scheduled publish to remote.
This method will trigger a staging by running a background task.
There is a method available:
StagingLocalServiceUtil.enableLocalStaging(long userId, Group liveGroup, boolean branchingPublic, boolean branchingPrivate, ServiceContext serviceContext)
As per the docs, an explanation of the parameters (a hedged example call follows the list below):
userId: the current user's ID.
liveGroup: the group (site) object for which you need to enable the staging feature.
branchingPublic: set this to true if you want to enable page versioning for the public pages.
branchingPrivate: set this to true if you want to enable page versioning for the private pages.
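A minimal sketch of such a call, assuming a Liferay 7.x (kernel) classpath; the user ID, group ID, and the bare ServiceContext are placeholders you would replace with values from your own scheduled job or script:

// Hedged sketch: enable local staging for a site programmatically.
// Package names are for Liferay 7.x; in 6.x these classes live under
// com.liferay.portal.model / com.liferay.portal.service instead.
import com.liferay.portal.kernel.exception.PortalException;
import com.liferay.portal.kernel.model.Group;
import com.liferay.portal.kernel.service.GroupLocalServiceUtil;
import com.liferay.portal.kernel.service.ServiceContext;
import com.liferay.portal.kernel.service.StagingLocalServiceUtil;

public class EnableStagingSample {

    public static void enableStaging(long userId, long liveGroupId) throws PortalException {
        // The live site for which staging should be enabled (placeholder group ID).
        Group liveGroup = GroupLocalServiceUtil.getGroup(liveGroupId);

        // A bare ServiceContext; in a real portlet or scheduled job you would
        // normally build it from the current request instead.
        ServiceContext serviceContext = new ServiceContext();

        StagingLocalServiceUtil.enableLocalStaging(
            userId,        // current user's ID
            liveGroup,     // live site group
            false,         // branchingPublic: page versioning for public pages
            false,         // branchingPrivate: page versioning for private pages
            serviceContext);
    }
}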
As described in this article: https://azure.microsoft.com/en-us/blog/windows-azure-web-sites-how-application-strings-and-connection-strings-work/, Azure Web Apps/Web Sites/Web Jobs can take their configuration settings (appSettings, connectionString) from environment variables instead of app.config/web.config.
For example, if an environment variable named "APPSETTING_appSettingKey" exists, it will override the following setting from app.config/web.config:
<appSettings>
<add key="appSettingKey" value="defaultValue" />
</appSettings>
This works fine once the application is deployed in Azure, but I would like to use the same method when testing locally.
I tried to emulate this in a local command line:
> set APPSETTING_appSettingKey=overriddenValue
> MyWebJob.exe
The web job accesses this setting using:
ConfigurationManager.AppSettings["appSettingKey"]
When running in Azure, it reads the value "overriddenValue" as expected, but locally it reads the value "defaultValue" from the app.config file.
Should I expect this to work, or is this implemented only under an Azure environment?
I could obviously create an abstraction over ConfigurationManager that emulates this, but this wouldn't work when calling code that needs a connection string name instead of a connection string value. Also, I want to use the same method regardless of the environment to simplify management of settings.
There are 3 reasons why I need this:
1) I don't like the idea of deploying to production a web.config file that references connection strings, etc. for a development environment, because there's a risk that a mistake would cause the development settings (in web.config) to be used in production (production web app connecting to a development database, etc.), for example if an environment variable is named incorrectly (after renaming the setting in web.config but forgetting to rename it in the environment variables).
2) I'm trying to set up development environments where each developer has their own isolated cloud resources (storage account, databases, ...). Currently, everyone has to manually edit their .config files to reference the correct resources, and be careful when checking in or merging changes to these files.
3) A solution can have multiple projects that need to duplicate the same settings (main web app, web jobs, integration test projects,...). This causes a lot of work to ensure updated settings are replicated across all files.
This would be simplified if there were an environment-independent .config file without any actual configuration; each developer would configure a set of environment variables once and be able to use them for all parts of a solution.
Yes, this special transformation of environment variables into config values is done via a component that is specific to Azure WebApps and won't be in play locally.
Generally people are fine with the local behavior this produces - locally you are reading from config settings as usual, but in Azure you're reading from secure settings that were configured via the App Settings portal blade (so these settings aren't in your source code).
You could write an abstraction over this if you wish; e.g. the WebJobs SDK actually does this internally (code here).
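For reference, a minimal sketch of what such an abstraction could look like (the class and method names here are made up for illustration): check environment variables first, with and without the APPSETTING_ prefix that Azure uses, then fall back to app.config/web.config:

using System;
using System.Configuration;

public static class SettingsReader
{
    // Returns the setting from an environment variable if one is defined
    // (either "APPSETTING_<key>" as Azure names them, or the bare key),
    // otherwise falls back to the appSettings section of app.config/web.config.
    public static string GetAppSetting(string key)
    {
        return Environment.GetEnvironmentVariable("APPSETTING_" + key)
            ?? Environment.GetEnvironmentVariable(key)
            ?? ConfigurationManager.AppSettings[key];
    }
}

// Usage: string value = SettingsReader.GetAppSetting("appSettingKey");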
When I am developing locally, I want to consistently use Environment.GetEnvironmentVariable. In my static Main method I have the following code:
if (config.IsDevelopment)
{
    config.UseDevelopmentSettings();
    Environment.SetEnvironmentVariable("UseDevelopmentSettings", "true");
}
Then in my static class Functions I add a static constructor and in there I call the static method below:
static void AddAppSettingsToEnvironmentVariables()
{
    String useDevelopmentSettings = Environment.GetEnvironmentVariable("UseDevelopmentSettings");
    if (!String.IsNullOrEmpty(useDevelopmentSettings))
    {
        // Copy every appSettings key from app.config into environment variables,
        // so Environment.GetEnvironmentVariable behaves the same locally as in Azure.
        foreach (String key in ConfigurationManager.AppSettings.AllKeys)
        {
            Environment.SetEnvironmentVariable(key, ConfigurationManager.AppSettings[key]);
        }
    }
}
The code is small enough that I can simply comment it out before I test in Azure.
If you want to test the application with the values that will be used in the Azure portal App Settings/Connection Strings, I would recommend using HostingEnvironment.IsDevelopmentEnvironment. To ensure it works, change <compilation debug="true" targetFramework="4.5.2" /> to <compilation debug="false" targetFramework="4.5.2" />, and set the values to the same values as in the Azure portal when HostingEnvironment.IsDevelopmentEnvironment == false. I have tried this with a simple project; hope it helps:
public ActionResult Index()
{
    if (HostingEnvironment.IsDevelopmentEnvironment == true)
    {
        ViewBag.Message = "Is development.";
    }
    else
    {
        ViewBag.Message = "Azure environment.";
    }
    return View();
}
Here is the result:
I would like all my projects in a GitLab group to have shared configuration for a webhook:
<MY_JENKINS_INSTANCE>/git/notifyCommit?url=$CHANGED_REPOSITORY
GitLab webhook documentation suggests it should be possible:
If you have a big set of projects in the one group then it will be convenient for you to configure web hooks globally for the whole group. You can add the group level web hooks on the group settings page.
That sounds exactly like what I am after, though I see no such thing on the group settings page in my GitLab 7.0.0, and I could not tell from the changelog whether this feature is newer than that version.
Does the feature exist? How do I use it?
That's possible in the enterprise version only:
In GitLab Enterprise Edition you can configure web hooks globally for the whole group. You can add the group level web hooks on the group settings page Settings > Web Hooks.
Following up on #VertigoRay's comments, here's a procedure to do it using the GitLab CE API:
Have, or create, a user in GitLab and a personal access token with the api scope:
User (top right avatar) > Settings (menu) > Access tokens (sidebar)
Check api scope (checkbox)
Click on create personal access token (button)
<my_personal_token> is the value in Your New Personal Access Token (text field)
Perform an HTTP request to get all projects:
GET https://gitlab.example.com/api/v4/projects
Private-Token: <my_personal_token>
Accept: application/json
For each project in the response:
id is the <project_ID> to be used in the next request URL
URL-encode the value of ssh_url_to_repo so that it becomes <encoded_ssh_url>
Example: ssh://git@example.com:1234/group/alpha.git becomes ssh%3A%2F%2Fgit%40example.com%3A1234%2Fgroup%2Falpha.git
For each project, perform an HTTP request to create a hook:
POST https://gitlab.example.com/api/v4/projects/<project_ID>/hooks
Private-Token: <my_personal_token>
Content-Type: application/json
{
  "url": "https://jenkins.example.com/git/notifyCommit?url=<encoded_ssh_url>",
  "enable_ssl_verification": true
}
This should be scripted in the language of your choice.
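For example, a hedged sketch in shell, assuming curl and jq are available, the token can see all the relevant projects, and everything fits in one page of API results (otherwise handle pagination):

GITLAB="https://gitlab.example.com/api/v4"
TOKEN="<my_personal_token>"

# List the projects, keep id and ssh_url_to_repo, then add a hook to each project.
curl --silent --header "Private-Token: $TOKEN" "$GITLAB/projects?per_page=100" |
  jq -r '.[] | "\(.id) \(.ssh_url_to_repo)"' |
  while read -r id ssh_url; do
    encoded=$(jq -rn --arg u "$ssh_url" '$u | @uri')   # URL-encode the repo URL
    curl --silent --request POST \
      --header "Private-Token: $TOKEN" \
      --header "Content-Type: application/json" \
      --data "{\"url\": \"https://jenkins.example.com/git/notifyCommit?url=$encoded\", \"enable_ssl_verification\": true}" \
      "$GITLAB/projects/$id/hooks"
  done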
Not suitable as a persistent solution, but this might be useful for someone looking for a one-time change (from the raketasks documentation):
Add a webhook for projects in a given NAMESPACE
# omnibus-gitlab
sudo gitlab-rake gitlab:web_hook:add URL="http://example.com/hook" NAMESPACE=acme
# source installations
bundle exec rake gitlab:web_hook:add URL="http://example.com/hook" NAMESPACE=acme RAILS_ENV=production