Using feature toggles in Azure Feature Manager gives the option to configure a label when creating a feature. In .NET, the only way I could find to use that label is to set it during startup when configuring Azure App Configuration, which can be done like:
config.AddAzureAppConfiguration(options =>
{
    options.Connect(settings.GetConnectionString("Config"))
        .UseFeatureFlags(o =>
        {
            o.Label = "Test";
            o.CacheExpirationInterval = TimeSpan.FromSeconds(1);
        });
});
The problem with that approach is that we can't change the label at runtime, as it is pre-configured during startup.
The question is how we can manage different labels. Let's say I have 100 labels and I want my application to evaluate a feature toggle with respect to its label; I could not find any way to achieve that.
To evaluate a feature toggle we call the feature manager like so:
await _featureManager.IsEnabledAsync(setting);
I'd say you can't, and this is not a bad thing. Labels are there so that you can have one feature flag saved multiple times, possibly with different values and filters. E.g.:
MyFlag (Label: test) true
MyFlag (Label: production) false
or
MyFlag (Label: america) true
MyFlag (Label: asia) false
So you should use labels to save different values of one feature for different stages, regions, or whatever other use cases you may have. You are not supposed to switch between labels in a running application.
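If the goal is just to pick the right label for each environment, that choice can still be made dynamically once at startup. A minimal sketch, assuming the flags are saved under labels matching the ASP.NET Core environment names and that hostingContext is the HostBuilderContext available inside ConfigureAppConfiguration:

config.AddAzureAppConfiguration(options =>
{
    options.Connect(settings.GetConnectionString("Config"))
        .UseFeatureFlags(o =>
        {
            // "Development", "Staging", "Production", ... -- assumed label names
            o.Label = hostingContext.HostingEnvironment.EnvironmentName;
            o.CacheExpirationInterval = TimeSpan.FromSeconds(30);
        });
});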
Related
I'm starting with BDD (Cucumber + Capybara + Selenium ChromeDriver) and TDD (RSpec) with factory_bot, and I'm getting an error in the Cucumber features / step_definitions.
uninitialized constant User (NameError)
With TDD everything is OK; FactoryBot is working fine. The problem is with Cucumber.
factories.rb
FactoryBot.define do
  factory :user_role do
    name {"Admin"}
    query_name {"admin"}
  end

  factory :user do
    id {1}
    first_name {"Mary"}
    last_name {"Jane"}
    email {"mary_jane@gmail.com"}
    password {"123"}
    user_role_id {1}
    created_at {'1/04/2020'}
  end
end
support/env.rb
require 'capybara'
require 'capybara/cucumber'
require 'selenium-webdriver'
require 'factory_bot_rails'
Capybara.register_driver :selenium do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

Capybara.configure do |config|
  config.default_driver = :selenium
end

Capybara.javascript_driver = :chrome

World(FactoryBot::Syntax::Methods)
And the problem is happening here
support/hooks.rb
Before '@admin_login' do
  @user = create(:user)
end
step_definitions/admin_login.rb
Given("a registered user with the email {string} with password {string} exists") do |email, password|
#user
end
I don't know why, but I can't access the user using Cucumber and factory_bot.
Could anybody help me, please?
I think I need to configure something in Cucumber.
What do you think, guys?
First of all Luke is correct about this being a setup issue. The error is telling you that the User model cannot be found which probably means Rails is not yet loaded. I can't remember the exact details of how cucumber-rails works but one of the things it does is to make sure that each scenario becomes an extension of a Rails integration test. This ensures that all of the Rails auto-loading has taken place and that these things are available.
Secondly I'd suggest you start simpler and use a step to create your registered user rather than using a tag. Using tags for setup is a Cucumber anti-pattern.
Finally, and more controversially, I'd suggest that you don't use FactoryBot when cuking. FactoryBot uses a separate configuration to create model objects directly in the datastore. This bypasses any application logic around the creation of these objects, which means the objects created by FactoryBot are going to end up being different from the objects created by your application. In real life, object creation involves things like auditing, sending emails, conditional logic, etc. To use FactoryBot you either have to duplicate that additional creation logic and behavior or ignore it (both choices are undesirable).
You can create objects for cuking much more effectively (and quicker) by using the following pattern.
Each create method in the Rails controller delegates its work to a service object, e.g.:
UserController

class UserController < ApplicationController
  def create
    @user = CreateUserService.new(params).call
  end
end
Then have your cukes use a helper module to create things for you. This module will provide tools for your steps to create users, using the above service:
module UserStepHelper
  def create_user(params = {})
    CreateUserService.new(default_params.merge(params)).call
  end

  def default_params
    {
      ...
    }
  end
end

World UserStepHelper

Given 'there is a registered user' do
  @registered_user = create_user
end
and then use that step in the background of your feature e.g.
Background:
  Given there is a registered user
  And I am an admin

Scenario: Admin can see registered users
  When I login and view users
  Then I should see a user
Notice the absence of tagging here. It's not desirable or necessary here.
You can see an extension of this approach in a sample application I did for a CukeUp talk in 2013 here: https://github.com/diabolo/cuke_up/commits/master. If you follow it commit by commit, starting from the first commit at the bottom, you will get quite a good guide to setting up a Rails project with Cucumber in just the first 4 or 5 commits. If you follow it through to the end (22 commits) you'll get a basic but powerful framework for creating and using model objects when cuking. I realize the project is ancient and that obviously you will have to use modern versions of everything, but the principles still apply; I use this approach in all my work and have been doing so for at least 10 years.
So if you're using Rails, it's probably advisable to use cucumber-rails over plain cucumber. This is probably an issue where your User model has not been auto-loaded.
Cucumber auto-loads all Ruby files underneath features/, with env.rb first; it's almost certainly an issue with load order / load location.
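If you do switch to cucumber-rails, a minimal features/support/env.rb could look roughly like this (a sketch, assuming cucumber-rails and factory_bot_rails sit in the :test group of your Gemfile); the require 'cucumber/rails' line is what boots the Rails app so constants like User can be resolved:

# features/support/env.rb -- sketch, not a drop-in replacement
require 'cucumber/rails'        # boots Rails and its autoloading
require 'factory_bot_rails'     # loads your factories
require 'capybara/cucumber'
require 'selenium-webdriver'

Capybara.default_driver    = :selenium
Capybara.javascript_driver = :selenium

# makes create(:user) etc. available inside step definitions
World(FactoryBot::Syntax::Methods)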
I have a bit of a structural dilemma in SoapUI. Tests can be run at project, test suite, or test case level.
Currently we can run a whole project at project level, and it will display a prompt box to select an endpoint (through a project-level setup script) and produce a project report using the project-level teardown script.
However, the tester may not want to run a whole project and may only want to run a test suite or even a single test case, and it would be a hassle to disable the suites or cases you don't want to run.
The problem I have is that if I start putting prompt boxes to select endpoints at suite or case level, every time we hit a suite or case it will ask for an endpoint. I am also thinking of not creating suite or test case reports, because when running many suites or cases one by one it is just overkill on reporting.
I like your thinking on this, but I was speaking with my professional colleague and what we're thinking is this:
Add the code below at test suite and test case level, in their relevant setup scripts where it asks for the endpoint (this is the same code used in the project setup script for selecting the endpoint):
import com.eviware.soapui.support.*

def alert = com.eviware.soapui.support.UISupport
def urls = []

project.properties.each {
    if (it.value.name.startsWith("BASE_URL_"))
    {
        urls.push(it.value.name.replace("BASE_URL_", ""))
    }
}

def urlName = alert.prompt("Please select the environment URL", "Enter URL", urls)

if (urlName)
{
    def url = project.getPropertyValue("BASE_URL_" + urlName)
    def urlBase = "BASE_URL_" + urlName
    project.setPropertyValue("BASE_URL", url)

    switch (urlBase) {
        case "BASE_URL_TEST":
            project.setPropertyValue("DOMAIN_NAME", "TEST");
            break;
        case "BASE_URL_STAGE":
            project.setPropertyValue("DOMAIN_NAME", "STAGE");
            break;
        default:
            project.setPropertyValue("DOMAIN_NAME", "NO DOMAIN");
            break;
    }
}
else
{
    log.warn 'haven\'t received user input'
    log.warn 'No base URL is selected or cancelled, try again'
    assert false
}
Now what we would add is the following, and we may need to use properties, but again see what you think is best:
If the test is run at project level, it will prompt for an endpoint through the project setup script, but it will not ask through the test suite or test case setup scripts. So it's only a single endpoint selection.
If the test is run at suite level, it will prompt for an endpoint through the test suite setup script, but it will not ask through the test case setup script. So it's only a single endpoint selection.
If the test is run at test case level, it only runs that test case, so it's at the lowest level and it asks for an endpoint for that test case.
We can't have setup scripts disabled at any level, because there may be other code in those setup scripts that will need to be executed; we just need a way to say, depending on the level the run started at, don't ask for an endpoint at the lower levels.
It seems complicated to implement, but does anyone know the best way to implement this, or does anyone have a better idea than this theory?
Thanks
For a moment, let us assume you get it done for all levels (project, suite, and each case). Maybe you forgot about the step level ;-)
Are there any pros in your approach? For me, no.
Cons in your approach:
Each time a user executes a test (be it a project / suite / test case), the engineer needs to select a value from the drop-down, which is unwanted and a little annoying when testing against the same server as the previous test case.
Test execution requires manual intervention every time it is invoked.
A user interface is required, since a drop-down is being used.
It will become a roadblock / hurdle for end-to-end automation.
Test execution can't be done in headless mode, which is important if you need to use Continuous Integration tools.
Proposed approach:
If I had to do the above, I would do the following. It is clean and simple, and none of the complications you mentioned in your long summary would arise.
It looks like there are the following project properties defined with the addresses of the test servers:
BASE_URL_TEST
BASE_URL_STAGE
There is also another project property defined, BASE_URL, and all the above logic exists to let the user copy the value of one of those properties into BASE_URL.
Now all the user has to do is change the value of the project property BASE_URL. The user just has to set one of the below values by hand, whichever one is needed, before proceeding with their tests:
${#Project#BASE_URL_TEST} or
${#Project#BASE_URL_STAGE}
NOTE that a property value can be referenced from another property by using Property Expansion, as above.
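For example, each test request's endpoint can then simply reference the property, e.g. ${#Project#BASE_URL}/your/service/path (the path here is only illustrative), so every request automatically follows whatever BASE_URL currently points to.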
With the above, the user can set whatever is needed and change it only when required, i.e. when the test server has to change.
No setup script is needed at any level any more; simply change the value of the property.
Properties are there to make life simple; they can be used in any number of places and make the project easy to maintain.
Most importantly, this overcomes the cons mentioned at the beginning.
It is general practice to use SoapUI to design the tests and the SOAPUI_HOME/bin/testrunner.bat or .sh utility to execute them in command-line mode, and that is the way to achieve Continuous Integration.
That's why using properties helps here to achieve the above without any issues.
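For example, a command line along these lines runs a single suite against a chosen server without any prompts (a sketch; MyTestSuite and MyProject-soapui-project.xml are placeholders, -s selects the test suite and -P sets or overrides a project property):

SOAPUI_HOME/bin/testrunner.sh -s"MyTestSuite" -P"BASE_URL=http://testjuniper" MyProject-soapui-project.xml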
Even simpler:
Just have one project property, BASE_URL (remove the others); the user only has to edit the property value with the test server name / IP address once, say http://testjuniper. Isn't it dead simple?
And I believe the engineer would definitely know which server he / she is using to execute the tests.
Having said that, the user does not have to bother at all, irrespective of whether a project / suite / test case is executed, as long as testing is carried out against the same server / environment.
Once test execution against the TEST environment is finished, the engineer may move on to another environment, say STAGING, by just changing the BASE_URL property value accordingly.
Let me stress that I am not a programmer but I like messing around with things. I've been using #ifttt and #nest for years and recently started using #smartthings to do cool things in my house.
I wanted to power off devices such as my lights and water heater when leaving my house. Rather than having this depend on one device such as a phone or keyfob, I wanted to use the Nest "auto-away" feature.
Auto-away doesn't appear to be exposed to #ifttt or #smartthings. I've asked #nestsupport and they told me to come here :-o.
Does anyone from the Nest developer team know when developers and other products will be able to consume this from the Nest device? It's a real shame that after several years this isn't exposed yet. Not only that, but it could be an additional selling point to integrate with and turn on/off items in your house.
Thanks
I'm not from the Nest developer team, but I've played around with the Nest API in the past, and use it to plot my energy usage.
The 'auto away' information is already accessible in the API, and looks to be used in a number of IFTTT recipes:
https://ifttt.com/recipes/search?q=auto+away&ac=false
Within the (JSON) data received back from the API, the 'auto away' status is accessible via:
shared->{serial_number}->auto_away
This is set as a boolean (0 or 1).
If you like messing around with code and know any PHP, then this PHP class for the Nest API is very useful for grabbing all the information:
https://github.com/gboudreau/nest-api
Auto-Away is, and always has been, readable: https://developer.nest.com/documentation/cloud/api-overview#away
There are a few ways you could go about doing this, but if you're writing up a SmartApp just for your own uses, I'd suggest piggybacking off of one of the existing device types for the Nest on SmartThings. As a quick example, I'll use the one that I use:
https://github.com/bmmiller/device-type.nest/blob/master/nest.devicetype.groovy
After line 96, add this to expose the status to any SmartApp you may write:
attribute "temperatureUnit", "string"
attribute "humiditySetpoint", "number"
attribute "autoAwayStatus", "number" // New Line
Now, you'll want to take care of getting the data in the existing poll() method, currently starting at line 459.
After line 480, to update the attribute:
sendEvent(name: 'humidity', value: humidity)
sendEvent(name: 'humiditySetpoint', value: humiditySetpoint, unit: Humidity)
sendEvent(name: 'thermostatFanMode', value: fanMode)
sendEvent(name: 'thermostatMode', value: temperatureType)
sendEvent(name: 'autoAwayStatus', value: data.shared.auto_away) // New Line
This will expose a numerical value for the auto_away status.
-1 = Auto Away Not Enabled
0 = Auto Away Off
1 = Auto Away On
Then, in the SmartApp you write, include an input of type thermostat like this:
section("Choose thermostat... ") {
input "thermostat", "capability.thermostat"
}
You will be able to access the Auto Away status by referring to
thermostat.autoAwayStatus
from anywhere in your code, where you can do something like:
if (thermostat.autoAwayStatus == 1) {
    // Turn off everything
}
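For completeness, here is a rough sketch of what the SmartApp side could look like once the attribute is exposed (switches is assumed to be another input of capability.switch in your preferences; attribute values arrive as strings in events):

def installed() {
    subscribe(thermostat, "autoAwayStatus", autoAwayHandler)
}

def updated() {
    unsubscribe()
    subscribe(thermostat, "autoAwayStatus", autoAwayHandler)
}

def autoAwayHandler(evt) {
    // evt.value is a string, so compare against "1" (Auto Away On)
    if (evt.value == "1") {
        switches.off()   // turn off everything the user selected
    }
}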
Is there a means of reconfiguring the GridCacheConfiguration at runtime for GridGain?
The end goal is to be able to add a grid cache at runtime after having started up the Grid.
final GridConfiguration gridConfiguration = new GridConfiguration();
gridConfiguration.setMarshaller(new GridOptimizedMarshaller());

Grid grid = GridGain.start(gridConfiguration);
...

// later on
GridCacheConfiguration newCacheConfig = ...; // defines newConfig
grid.configuration().setCacheConfiguration(newCacheConfig);
grid.cache("newConfig"); // <-- throws a cache not defined error!
Adding caches usually has to do with handling different data types (generics), which GridGain addresses with GridCacheProjections, like so:
GridCacheProjection<Integer, MyType> prj = cache.projection(Integer.class, MyType.class);
You can create as many different projections from the same cache as needed. In addition to specifying data types, you can also use projections to turn cache flags on and off, or to provide a filtered view of the cache with projection filters.
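As a rough usage sketch (GridGain 6.x-style API; the cache name "partitioned" and the MyType class are placeholders, and the checked GridException is simply propagated):

// Assumes a cache named "partitioned" was declared in the GridConfiguration
// passed to GridGain.start(...).
void useTypedProjection(Grid grid) throws GridException {
    GridCache<Object, Object> cache = grid.cache("partitioned");
    GridCacheProjection<Integer, MyType> prj = cache.projection(Integer.class, MyType.class);

    prj.putx(1, new MyType());   // type-checked put
    MyType value = prj.get(1);   // type-checked get
}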
In CRM 2011, under Account, there is the ability to add a Connection. After clicking Add Connection, you can browse/search for Name, which defaults to "Contact". Is there a way to make "Account" the default without having to switch the select box?
Apparently it is just a matter of doing this:
document.getElementById("record2id").setAttribute("defaulttype", "1");
But I did a little searching and this does not work for the connections dialog; check this alternative.
This doesn't work with connections.
With connections, the object type code for the lookup is set in the Mscrm.Connection.preSelectObjectType function in Microsoft Dynamics CRM\CRMWeb\_static\entities\connection.js.
There is a line like
$v_2.set_defaultType($v_3);
where the object type is set. $v_3 is set depending on the chosen role.
So you need to change it to
$v_2.set_defaultType(Mscrm.EntityTypeCode.Account.toString());
But you will lose the role-based lookup configuration, so you might want to modify that. Plus it is unsupported, and you will need to take into account the updating behavior when installing new rollups that change connection.js (i.e. copy newer connection.js files by hand from an updated system and customize them again).
Here are two approaches. Both work, but the first adds the record type icon to the lookup field even if it is empty. The second doesn't do that, but it is a little riskier as it depends on internal method names.
1st method:
if (IsNull(Xrm.Page.getAttribute('record2id').getValue())) {
    $("#record2id")[0].DataValue = [{ "type": Mscrm.EntityTypeCode.SystemUser.toString() }];
}
2nd method:
document.original_preSelectObjectType = Mscrm.Connection.preSelectObjectType;

Mscrm.Connection.preSelectObjectType = function (roleLookup, peerRoleLookup) {
    if (IsNull(roleLookup.DataValue) && IsNull(peerRoleLookup.DataValue) && !window.event.srcElement.DataValue) {
        var $v_0 = window.event.srcElement;
        $v_0.defaulttype = Mscrm.EntityTypeCode.SystemUser.toString();
        $v_0.DefaultViewId = "";
        $v_0.Lookup(true, false, null, false);
    }
    else {
        document.original_preSelectObjectType(roleLookup, peerRoleLookup);
    }
};