We have a multi-tenant Azure AD application that is not visible in certain other tenants. Is there a tenant-level setting to allow third-party applications?
We run the following command from the Azure CLI to check whether the application is visible:
az ad app show --id appID
We get the following error (I have x'd out the application ID):
Resource 'xxxxx' does not exist or one of its queried reference-property objects are not present.
Traceback (most recent call last):
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\main.py", line 36, in main
    cmd_result = APPLICATION.execute(args)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\application.py", line 216, in execute
    result = expanded_arg.func(params)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\commands\__init__.py", line 381, in __call__
    return self.handler(*args, **kwargs)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\commands\__init__.py", line 640, in _execute_command
    raise client_exception
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\commands\__init__.py", line 628, in _execute_command
    exception_handler(ex)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\util.py", line 49, in empty_on_404
    raise ex
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\commands\__init__.py", line 612, in _execute_command
    result = op(client, **kwargs) if client else op(**kwargs)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\command_modules\role\custom.py", line 455, in show_application
    return client.get(object_id)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\graphrbac\operations\applications_operations.py", line 272, in get
    raise models.GraphErrorException(self._deserialize, response)
azure.graphrbac.models.graph_error.GraphErrorException: Resource 'xxxx' does not exist or one of its queried reference-property objects are not present.
Users in the other tenant need to consent to the permissions requested by the multi-tenant app. Once they do, the application appears in that tenant as a service principal. You may have missed this step:
When a user from a different tenant signs in to the application for
the first time, Azure AD asks them to consent to the permissions
requested by the application. If they consent, then a representation
of the application called a service principal is created in the user’s
tenant, and sign-in can continue.
After this is done, you can run az ad sp list to check whether the service principal exists in that tenant.
Also, make sure your multi-tenant app is configured correctly before you start signing in to it. For more details about how to sign in any Azure AD user using a multi-tenant app, please refer to this document.
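If prompting each user is awkward, an administrator in the other tenant can grant consent for the whole tenant at once by visiting the Azure AD admin consent endpoint, which also creates the service principal there. A minimal sketch of building that URL (the tenant and client IDs below are placeholders, not values from the question):

```python
# Sketch: build the Azure AD admin-consent URL for a multi-tenant app.
# Visiting this URL as an admin of the target tenant grants tenant-wide
# consent and creates the service principal. Placeholder IDs only.
tenant = "othertenant.onmicrosoft.com"
client_id = "00000000-0000-0000-0000-000000000000"

consent_url = (
    f"https://login.microsoftonline.com/{tenant}"
    f"/adminconsent?client_id={client_id}"
)
print(consent_url)
```

After the admin has visited the URL and accepted, az ad sp list in that tenant should show the new service principal.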
I have a directory (wallpaper_app/Best_Wallpapers) in Firebase Storage, and it has some files. I want the URLs of all the files, but when I try to list them with
image = storage.child('wallpaper_app/Best_Wallpapers/').list_files()
I am facing this error:
  File "d:\Project\wallpaper-app-kivy\temp.py", line 33, in <module>
    imageUrl = storage.list_files()
  File "D:\Project\wallpaper-app-kivy\wallpaper-app\lib\site-packages\pyrebase\pyrebase.py", line 507, in list_files
    return self.bucket.list_blobs()
AttributeError: 'Storage' object has no attribute 'bucket'
You have to get a service account:
Go to Project settings, then Service accounts, under Firebase Admin SDK.
Scroll down and generate a new private key; this downloads a key file.
Paste the path to that file into your config dict under the key "serviceAccount".
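Putting the steps above together, the config might look like the following sketch. Every value here is a placeholder for your own project's settings; the Pyrebase calls are shown commented out since they need the pyrebase package and a real key file.

```python
# Sketch of a Pyrebase config that includes the downloaded service-account key.
# All values are placeholders; substitute your own project's settings.
config = {
    "apiKey": "your-api-key",
    "authDomain": "your-project.firebaseapp.com",
    "databaseURL": "https://your-project.firebaseio.com",
    "storageBucket": "your-project.appspot.com",
    # Path to the private key file downloaded from
    # Project settings > Service accounts > Generate new private key:
    "serviceAccount": "path/to/serviceAccountKey.json",
}

# With the service account present, Storage is backed by a real bucket and
# list_files() should no longer raise the 'bucket' AttributeError:
# import pyrebase
# firebase = pyrebase.initialize_app(config)
# storage = firebase.storage()
# files = storage.child("wallpaper_app/Best_Wallpapers").list_files()
```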
I'm trying to deploy our app.yaml and queue.yaml using the following command:
gcloud --verbosity=debug --project PROJECT_ID app deploy app.yaml queue.yaml
I created a new service account with the roles
App Engine Deployer
App Engine Service Admin
Cloud Build Service Account
for deploying the app.yaml, which works by itself. When trying to deploy the queue.yaml, I get the following error:
DEBUG: Running [gcloud.app.deploy] with arguments: [--project: "PROJECT_ID", --verbosity: "debug", DEPLOYABLES:1: "[u'queue.yaml']"]
DEBUG: Loading runtimes experiment config from [gs://runtime-builders/experiments.yaml]
INFO: Reading [<googlecloudsdk.api_lib.storage.storage_util.ObjectReference object at 0x7fcc7dba0dd0>]
DEBUG: API endpoint: [https://appengine.googleapis.com/], API version: [v1]
Configurations to update:
descriptor: [/home/dominic/workspace/PROJECT/api/queue.yaml]
type: [task queues]
target project: [PROJECT_ID]
DEBUG: (gcloud.app.deploy) PERMISSION_DENIED: The caller does not have permission
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 983, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 807, in Run
resources = command_instance.Run(args)
File "/usr/lib/google-cloud-sdk/lib/surface/app/deploy.py", line 117, in Run
default_strategy=flex_image_build_option_default))
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/deploy_util.py", line 606, in RunDeploy
app, project, services, configs, version_id, deploy_options.promote)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/output_helpers.py", line 111, in DisplayProposedDeployment
DisplayProposedConfigDeployments(project, configs)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/app/output_helpers.py", line 134, in DisplayProposedConfigDeployments
project, 'cloudtasks.googleapis.com')
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/services/enable_api.py", line 43, in IsServiceEnabled
service = serviceusage.GetService(project_id, service_name)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/services/serviceusage.py", line 168, in GetService
exceptions.ReraiseError(e, exceptions.GetServicePermissionDeniedException)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/services/exceptions.py", line 96, in ReraiseError
core_exceptions.reraise(klass(api_lib_exceptions.HttpException(err)))
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/exceptions.py", line 146, in reraise
six.reraise(type(exc_value), exc_value, tb)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/services/serviceusage.py", line 165, in GetService
return client.services.Get(request)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/third_party/apis/serviceusage/v1/serviceusage_v1_client.py", line 297, in Get
config, request, global_params=global_params)
File "/usr/bin/../lib/google-cloud-sdk/lib/third_party/apitools/base/py/base_api.py", line 731, in _RunMethod
return self.ProcessHttpResponse(method_config, http_response, request)
File "/usr/bin/../lib/google-cloud-sdk/lib/third_party/apitools/base/py/base_api.py", line 737, in ProcessHttpResponse
self.__ProcessHttpResponse(method_config, http_response, request))
File "/usr/bin/../lib/google-cloud-sdk/lib/third_party/apitools/base/py/base_api.py", line 604, in __ProcessHttpResponse
http_response, method_config=method_config, request=request)
GetServicePermissionDeniedException: PERMISSION_DENIED: The caller does not have permission
ERROR: (gcloud.app.deploy) PERMISSION_DENIED: The caller does not have permission
I've also tried the following roles:
Cloud Tasks Admin
Cloud Tasks Queue Admin
Cloud Tasks Service Agent
I'm using the Project Editor role for now, which works, but I would like to grant only the roles that are actually required.
In addition to the Cloud Tasks Queue Admin role, you have to add the Service Account User role to allow the Cloud Tasks service account to generate tokens on behalf of the service account.
I was banging my head against the wall for a while with this myself. Counter-intuitively, you also need the serviceusage.services.list permission, i.e. the Service Usage Viewer role.
Found via this issue: https://issuetracker.google.com/issues/137078982
In a project I am using RotatingFileHandler for logs. It is configured with a maximum size of 5 MB and a backup count of 2. The log handler is used by different threads. The program crashes after a while with the following error:
File "C:\Program Files (x86)\Python38-32\lib\logging\handlers.py", line 70, in emit
  self.doRollover()
File "C:\Program Files (x86)\Python38-32\lib\logging\handlers.py", line 171, in doRollover
  self.rotate(self.baseFilename, dfn)
File "C:\Program Files (x86)\Python38-32\lib\logging\handlers.py", line 111, in rotate
  os.rename(source, dest)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process
I think it happens because the logger tries to rename the file while it is already open in another thread. Any suggestions are appreciated. Thanks.
Are they actually operating in different threads? Or are they different processes?
As the docs explain
logging to a single file from multiple processes is not supported, because there is no standard way to serialize access to a single file across multiple processes in Python
Those same docs also outline some possible solutions to this problem, such as using a multiprocessing Lock, which I won't replicate here.
I want to check text similarity using the paralleldots API in App Engine, but when setting the API key in App Engine using
paralleldots.set_api_key("XXXXXXXXXXXXXXXXXXXXXXXXXXX")
App Engine gives this error:
INFO 2019-03-17 10:43:59,852 module.py:835] default: "GET / HTTP/1.1" 500 -
INFO 2019-03-17 10:46:47,548 client.py:777] Refreshing access_token
ERROR 2019-03-17 10:46:50,931 wsgi.py:263]
Traceback (most recent call last):
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/home/ulti72/Desktop/koda/main.py", line 26, in <module>
paralleldots.set_api_key("XXXXXXXXXXXXXXXXXXXXXXXXXXX")
File "/home/ulti72/Desktop/koda/lib/paralleldots/config.py", line 13, in set_api_key
with open('settings.cfg', 'w') as configfile:
File "/home/ulti72/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime/stubs.py", line 278, in __init__
raise IOError(errno.EROFS, 'Read-only file system', filename)
IOError: [Errno 30] Read-only file system: 'settings.cfg'
The paralleldots API seems to want to save a settings.cfg file to the local filesystem in response to that call. That is not allowed in the 1st-generation standard environment, and is only allowed for files under the /tmp filesystem in the 2nd generation.
The local development server was designed for the 1st-generation standard environment and enforces the restriction with that error. It has limited support for the 2nd-generation environment; see Python 3.7 Local Development Server Options for new App Engine apps.
Things to try:
check if specifying the location of settings.cfg is supported and, if so, make it reside under /tmp. Maybe the local development server allows that, or you can switch to some local development method other than the development server.
check if saving the settings using an already open file handler is supported and, if so, use one obtained from the Cloud Storage client library, something along these lines: How to zip or tar a static folder without writing anything to the filesystem in python?
check if set_api_key() supports some other method of persisting the API key than saving the settings to a file
check if it's possible to specify the API key for every subsequent call so you don't have to persist it using set_api_key() (maybe using a common wrapper function for convenience)
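For the first option, here is a small sketch of the /tmp idea, assuming the library writes settings.cfg relative to the current working directory (which is what the traceback suggests). The paralleldots lines are commented out because the exact behavior depends on the library version:

```python
import os
import tempfile

# On 2nd-generation App Engine only /tmp is writable; the traceback shows
# set_api_key() doing open('settings.cfg', 'w') relative to the current
# working directory, so move there before the key is saved.
os.chdir(tempfile.gettempdir())

# Mimic what set_api_key() does internally, now on a writable path:
with open("settings.cfg", "w") as configfile:
    configfile.write("[settings]\napi_key = XXXX\n")

# import paralleldots
# paralleldots.set_api_key("XXXX")  # would now write /tmp/settings.cfg
```

Note this only helps on the 2nd-generation runtime (or a local setup that mirrors it); the 1st-generation sandbox has no writable filesystem at all.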
I need to gain access to a Kubernetes cluster in Azure from a Windows Server 2016 machine. I did not create the cluster, but I am assigned as Global Admin in the Azure account. I have already logged in to the Azure account successfully, but not yet to the server. I have already installed the kubectl CLI on the machine.
Now I need to access the cluster.
I have .kube/config, .ssh/id_rsa and .ssh/id_rsa.pub inside my C:\Users\Administrator folder. I have tried ssh -i ~/.ssh/id_rsa kubectluser@ourDNSname and was able to access the VM, so my private key is good. But I don't want to work inside the VM; my working directory is supposed to be on the Windows Server 2016 machine. I should be able to run kubectl get nodes and have it return a table of the 3 VMs.
This is what happens, though (again: SSH to the VM works, and I can run kubectl commands without problems inside the VM):
az acs kubernetes get-credentials --resource-group=myRGroup --name=myClusterName
returns
Authentication failed.
Traceback (most recent call last):
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\main.py", line 36, in main
    cmd_result = APPLICATION.execute(args)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\application.py", line 216, in execute
    result = expanded_arg.func(params)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\commands\__init__.py", line 377, in __call__
    return self.handler(*args, **kwargs)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\commands\__init__.py", line 620, in _execute_command
    reraise(*sys.exc_info())
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\six.py", line 693, in reraise
    raise value
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\core\commands\__init__.py", line 602, in _execute_command
    result = op(client, **kwargs) if client else op(**kwargs)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\command_modules\acs\custom.py", line 776, in k8s_get_credentials
    _k8s_get_credentials_internal(name, acs_info, path, ssh_key_file)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\command_modules\acs\custom.py", line 797, in _k8s_get_credentials_internal
    '.kube/config', path_candidate, key_filename=ssh_key_file)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\command_modules\acs\acs_client.py", line 72, in secure_copy
    ssh.connect(host, username=user, pkey=pkey, sock=proxy)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\paramiko\client.py", line 416, in connect
    look_for_keys, gss_auth, gss_kex, gss_deleg_creds, t.gss_host,
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\paramiko\client.py", line 701, in _auth
    raise saved_exception
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\paramiko\client.py", line 678, in _auth
    self._transport.auth_publickey(username, key))
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\paramiko\transport.py", line 1447, in auth_publickey
    return self.auth_handler.wait_for_response(my_event)
  File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\paramiko\auth_handler.py", line 223, in wait_for_response
    raise e
paramiko.ssh_exception.AuthenticationException: Authentication failed.
kubectl get nodes
returns
You must be logged in to the server
I can't use kubectl create or kubectl set image deployment because of this.
What do I need to do? What information do I need from the person and machine who/which created the cluster?
Edit:
I have .kube/config, .ssh/id_rsa and .ssh/id_rsa.pub inside my
C:\Users\Administrator folder.
The default path to the SSH key file is ~\.ssh\id_rsa. On Windows, we should specify the path explicitly, like this:
C:\Users\jason\.ssh>az acs kubernetes get-credentials --resource-group=jasonk8s --name jasonk8s --ssh-key-file C:\Users\jason\.ssh\k8s
Merged "jasontest321mgmt" as current context in C:\Users\jason\.kube\config
C:\Users\jason\.ssh>kubectl.exe get nodes
NAME STATUS ROLES AGE VERSION
k8s-agent-c99b4149-0 Ready agent 7m v1.7.7
k8s-master-c99b4149-0 Ready master 8m v1.7.7
In your scenario, please try this command to get the credentials:
az acs kubernetes get-credentials --resource-group=myRGroup --name=myClusterName --ssh-key-file C:\Users\Administrator\.ssh\id_rsa