Get Google App Engine LocationId at runtime - python-3.x

The new Cloud Tasks Python libraries require the location as a task-creation parameter. I can always look up the location and hardcode it, but everything else, including the project name, is available through environment variables. Is there a way to get the locationId (e.g. us-central1) from the Python 3 standard environment?

The REST API (and presumably the Python client library) for App Engine can return the location ID if you know the application name:
https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1/apps/get
The Application object that is returned has a "locationId" key.
However, note that the Cloud Tasks documentation calls out two exceptions to using this identifier verbatim: europe-west and us-central must be passed to Cloud Tasks as europe-west1 and us-central1, respectively.
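A minimal sketch of that approach using the google-api-python-client discovery client (an assumption, not part of the original answer), with Application Default Credentials and the GOOGLE_CLOUD_PROJECT environment variable that the Python 3 standard environment provides:

import os

import google.auth
from googleapiclient.discovery import build

# Application Default Credentials work out of the box on App Engine.
creds, _ = google.auth.default()
appengine = build('appengine', 'v1', credentials=creds, cache_discovery=False)

# apps.get returns the Application resource, which includes locationId.
app = appengine.apps().get(appsId=os.environ['GOOGLE_CLOUD_PROJECT']).execute()
location_id = app['locationId']

# Cloud Tasks expects the two short identifiers in their long form.
if location_id in ('europe-west', 'us-central'):
    location_id += '1'

Note that the App Engine Admin API has to be enabled on the project for this call to succeed.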

It's possible to get this information from the Metadata server. Accessing http://metadata.google.internal/computeMetadata/v1/instance/region from your app will return a string of the form 'projects/[numeric-project-id]/regions/[locationId]'.
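A minimal sketch of the metadata approach, assuming the requests library is available; the metadata server requires the Metadata-Flavor: Google header:

import requests

resp = requests.get(
    'http://metadata.google.internal/computeMetadata/v1/instance/region',
    headers={'Metadata-Flavor': 'Google'})
resp.raise_for_status()

# resp.text looks like 'projects/[numeric-project-id]/regions/[locationId]'
location_id = resp.text.rsplit('/', 1)[-1]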

Related

Node typescript library environment specific configuration

I am new to Node and TypeScript. I am developing a Node library that reaches out to another REST API to get and post data. This library is consumed by any UI application to send and receive data from the API service. My question is: how do I maintain environment-specific configuration within the library? For example:
Consumer calls GET /user
The user endpoint on the consumer side calls a method in the library to get the data
But if the consumer is calling the user endpoint in the test environment, I want the library to hit the following API URL:
for test: http://api.test.userinformation.company.com/user
for beta: http://api.beta.userinformation.company.com/user
As far as I understand, the library is just a reference and runs within the consumer application. The library can certainly get the environment from the consumer, but I do not want the consumer to have to specify the full URL to hit, since figuring that out should be the library's responsibility.
Note: the URL is not the only problem; I can solve that with an environment switch within the library. I also have some client secrets that vary by environment, which I can neither store in the code nor check in to source control.
Additional Information
(as per jfriend00's request in comments)
My library has a LibExecutionEngine class and one method in it, which is the entry point of the library:
export class LibExecutionEngine implements ExecutionEngine {
  constructor(
    private environment: Environments,
    private userLoader: UserLoader
  ) {}

  async GetUserInfo(
    userId: string,
    userGroupVersion: string
  ): Promise<UserInfo> {
    return this.userLoader.loadUserInfo(userId, userGroupVersion)
  }
}

export interface ExecutionEngine {
  GetUserInfo(userId: string, userGroupVersion: string): Promise<UserInfo>
}
The consumer starts using the library by creating an instance of LibExecutionEngine and then calling GetUserInfo, for example. As you can see, the constructor for the class accepts an environment. Once I have the environment in the library, I need to somehow load the values for the keys API URL, APIClientId, and APIClientSecret from within the constructor. I know of three ways to do this:
Option 1
I could do something like this._configLoader.SetConfigVariables(environment), where configLoader.ts is a class that loads environment-specific configuration values from files ({environment}.json). But this would mean maintaining the above-mentioned URL variables and the corresponding clientId and clientSecret in JSON files, which I should not be checking in to source control.
Option 2
I could use the dotenv npm package and create one .env file defining the three keys, with the values stored in the deployment configuration. That works perfectly for an independently deployable application, but this is a library and doesn't run by itself in any environment.
Option 3
Accept a configuration object from the consumer, which means the consumer of the library provides the URL, clientId, and clientSecret for the environment. But why should the responsibility of maintaining the variables the library needs be put on the consumer?
Please suggest how best to implement this.
So, I think I got some clarity. Let's call my library L, the consuming app C1, and the API that the library calls to get user info A. All are internal applications in our org and have an OAuth setup so they can communicate; our infosec team provides the client IDs and secrets to individual applications. So my clarity here is: C1 would request its own clientId and clientSecret to hit A's URL, and C1 would then pass the three config values to the library, which the library uses to communicate with A. The same applies for some C2 in the future.
Which would mean that L somehow needs to accept a full configuration object, with all required config values, from its consumers C1, C2, etc.
Yes, that sounds like the proper approach. The library is just some code doing what it's told. It's the client in this case that had to fetch the clientId and clientSecret from the infosec team, and it has to maintain them and keep them safe; the client also has the URL that goes with them. So the client passes all of this into your library, ideally just once per instance, and you keep it in your instance data for the duration of that instance.

Rails 6+: order in which Rails reads SECRET_KEY_BASE (env var versus credentials.yml.enc)

For context, I'm in the process of updating a Rails app to 5.2 and then to 6.0.
I'm updating my credentials to use the config/credentials.yml.enc and config/master.key defaults with Rails 5.2+ apps.
The Rails docs state:
In test and development applications get a secret_key_base derived from the app name. Other environments must use a random key present in config/credentials.yml.enc
(emphasis added)
This leads me to think that in production the SECRET_KEY_BASE value is required to be read from Rails.application.credentials.secret_key_base via config/credentials.yml.enc. In test and development environments, the secret_key_base is essentially "irrelevant", since it's derived from the app name.
However, when I was looking at the Rails source code, it reads:
def key
  read_env_key || read_key_file || handle_missing_key
end
That seems to say the order of reading values is:
ENV["SECRET_KEY_BASE"]
Rails.application.credentials.secret_key_base
Raise an error
I use Heroku for my hosting and have an ENV["SECRET_KEY_BASE"] environment variable that stores this secret value.
Questions
If I have both ENV["SECRET_KEY_BASE"] and Rails.application.credentials.secret_key_base set, which one takes priority?
Is using the ENV var going to be deprecated at some point?
I have lots of environment-specific ENV variables because I don't want to use my production accounts in development for AWS S3 buckets, Stripe accounts, etc. The flat-file format of credentials.yml.enc seems to assume developers only need to access these third-party APIs in production. Is there an accepted format for handling environment-specific credentials in Rails yet?
I read through the comment threads on DHH's original PR as well as a linked PR that says it implements environment-specific credentials, but the docs don't mention this implementation so I'm not certain if it's the standard or if it's going to go away sometime soon.

What is the suggested method to get service versions

What is the best way to get the list of service versions in Google App Engine, flexible environment (from a service instance, in Python 3)? I want to authenticate using a service account JSON key file. I need to find the currently default version (the one receiving most of the traffic).
Is there any library I can use, like googleapiclient.discovery or google.appengine.api.modules? Or should I build it from scratch and call the REST API apps.services.versions.list using OAuth? I couldn't find any information in the Google docs.
https://cloud.google.com/appengine/docs/standard/python3/python-differences#cloud_client_libraries
Finally I was able to solve it. Simple things on GAE can become big problems.
SOLUTION:
I have the path to service_account.json set in the GOOGLE_APPLICATION_CREDENTIALS environment variable. Then you can use google.auth.default:
import google.auth
from googleapiclient.discovery import build

# Application Default Credentials pick up GOOGLE_APPLICATION_CREDENTIALS.
creds, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform.read-only'])
service = build('appengine', 'v1', credentials=creds, cache_discovery=False)
data = service.apps().services().get(
    appsId=APPLICATION_ID, servicesId=SERVICE_ID).execute()
print(data['split']['allocations'])
The return value is an allocations dictionary with version IDs as keys and their traffic allocations (fractions of traffic) as values.
All the best!
You can use Google's Python Client Library to interact with the Google App Engine Admin API, in order to get the list of a GAE service versions.
Once you have google-api-python-client installed, you might want to use the list method to list all services in your application:
list(appsId, pageSize=None, pageToken=None, x__xgafv=None)
The arguments of the method should include the following:
appsId: string, Part of `name`. Name of the resource requested. Example: apps/myapp. (required)
pageSize: integer, Maximum results to return per page.
pageToken: string, Continuation token for fetching the next page of results.
x__xgafv: string, V1 error format. Allowed values: v1 error format, v2 error format
You can find more information on this method in the link mentioned above.
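As a sketch building on the above (assuming google-api-python-client with Application Default Credentials, as in the earlier solution; APPLICATION_ID and SERVICE_ID are placeholders), you could combine versions.list with the traffic split from services.get to find the version currently receiving the most traffic:

import google.auth
from googleapiclient.discovery import build

creds, _ = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform.read-only'])
appengine = build('appengine', 'v1', credentials=creds, cache_discovery=False)

# Enumerate the versions of one service (results may be paginated).
versions = appengine.apps().services().versions().list(
    appsId=APPLICATION_ID, servicesId=SERVICE_ID).execute()
for version in versions.get('versions', []):
    print(version['id'], version.get('servingStatus'))

# The version with the largest traffic allocation is the effective default.
service = appengine.apps().services().get(
    appsId=APPLICATION_ID, servicesId=SERVICE_ID).execute()
allocations = service['split']['allocations']
default_version = max(allocations, key=allocations.get)
print(default_version)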

Introduce new data source in Terraform

I am new to Terraform and have been trying to understand its constructs. Let's say I have a service which exposes REST APIs and I want to call those REST APIs as part of my Terraform script; what are the steps I need to take?
My understanding is that I need to write a custom provider, but I am unable to connect the dots on how to add a new data source type for the new provider.
Also, assuming that we do have the required provider, what protocol would be used for communicating with my service? Is it HTTP/S?
One more point to note is that my service is currently used for configuring storage in the backend.
Recent versions of Terraform (> 0.9, I believe) support external data sources. You don't have to create a custom provider. You can call any arbitrary shell or Python script that returns values you can use as data.
data "external" "example" {
program = ["python", "${path.module}/example-data-source.py"]
query = {
# arbitrary map from strings to strings, passed
# to the external program as the data query.
id = "abc123"
}
}
In your case you could use a simple curl call in a bash script to hit your endpoint and return the data to Terraform as a map of strings.
Do note the warnings at the top of that page.
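For reference, a minimal sketch of what a program like example-data-source.py could look like (a hypothetical script, not the questioner's actual service): the external data source passes the query block as a JSON object on stdin, and the program must print a JSON object whose values are all strings to stdout.

import json
import sys

# Terraform passes the query block as JSON on stdin, e.g. {"id": "abc123"}.
query = json.load(sys.stdin)

# Call your REST API here (urllib/requests) using query["id"], then map the
# response into a flat dictionary of string values for Terraform to consume.
result = {
    "id": query.get("id", ""),
    "status": "ok",
}

json.dump(result, sys.stdout)

The same contract applies to a bash/curl script: read the JSON query from stdin and emit a flat JSON object of strings.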
This is considerably more difficult than it appears; it is practically impossible to debug the interaction between what Terraform is sending to my script and what the script is expecting. It just fails to parse the arguments and refuses to provide any feedback as to what is getting into the program.

Match a Deployment ID in Windows Azure

I have several different services running the same code base as Windows Azure worker roles.
I'm trying to test and see whether the currently executing code is running in a specific instance. If I make this call to the Management API:
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(
    new Uri("https://management.core.windows.net/" + subscriptionId + "/services/hostedservices/<<servicename>>/deploymentslots/production?embed-detail=true"));
I get a response like this:
<Deployment xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <Name>c8bd3b12f1bc4e0db9d8c1d59e97e48b</Name>
  <DeploymentSlot>Production</DeploymentSlot>
  <PrivateID>d1ea61e367e84aedb68de97eded3e896</PrivateID>
  <Status>Running</Status>
  <Label>SXRlbVVwZGF0ZXIgLSAzLzEzLzIwMTMgMTA6NDQ6MTUgQU0=</Label>
  <Url>http://itemupdater3.cloudapp.net/</Url>
  <RoleInstanceList>
    <RoleInstance>
      <RoleName>UpdateItems</RoleName>
      <InstanceName>UpdateItems_IN_0</InstanceName>
      <InstanceStatus>Ready</InstanceStatus>
    </RoleInstance>
  </RoleInstanceList>
  <UpgradeDomainCount>1</UpgradeDomainCount>
  <RoleList>
    <Role>
      <RoleName>UpdateItems</RoleName>
      <OsVersion>WA-GUEST-OS-1.22_201302-02</OsVersion>
    </Role>
  </RoleList>
</Deployment>
I'm trying to test and see if the currently executing code has the same ID as this response.
If I compare:
xml["Deployment"]["Name"].InnerText;
To
RoleEnvironment.CurrentRoleInstance.Role.Instances[0].Id;
It never matches. How do I match something from the C# code to the ID returned from the API?
Thanks!
You're trying to compare the name of the deployment (typically a single guid-like string, unique every time you redeploy) to the name of the instance (follows a pattern of RoleName_IN_xxx). They will never match.
I'm not 100% sure what you're trying to do, but a call to the Service Management API will never give you information about your current instance, because it does not know where you are calling from; you can even call the API from non-Azure resources. It simply gives you data about the whole subscription.
RoleEnvironment.CurrentRoleInstance.Id will provide you with the ID of the current instance.
Kevin, use RoleEnvironment.DeploymentId instead of RoleEnvironment.CurrentRoleInstance. This will allow you to compare what is currently running with what you get from the Service Management API.
