DBus Secret Service prompting - GNOME

Hi, I'm attempting to use the Secret Service to access secrets in GNOME Keyring.
Everything works OK if the collection I'm trying to access is already unlocked. The problem I'm running into is when the collection/secrets are locked and the user needs to be prompted for their password to unlock them.
According to the docs:
Operations that require a prompt to complete will return a prompt object. The client application must then call the Prompt() method of the prompt object to display the prompt. Client applications can use the window-id argument to display the prompt attached to their application window.
The Prompt() method is supposed to exist on the org.freedesktop.Secret.Prompt interface, per the docs.
I'm using D-Feet to try to find it, but nothing matching that name shows up on either the session or system bus.
Any idea if this has moved? Or if I should be doing something else to display the prompt?
Thanks!

The test I did used the "create a new collection" functionality, as that was the easiest way I found to trigger a prompt.
There are two object paths returned from CreateCollection:
collection
The new collection object, or '/' if prompting is necessary.
prompt
A prompt object if prompting is necessary, or '/' if no prompt was needed.
I then used the prompt object to call the Prompt method with a random window ID and waited for the prompt's Completed signal before continuing.
[Screenshot of the prompt dialog]
From the comments I can see you are using C#; unfortunately, I don't know C#, so I have done my tests in Python. I'm hoping there is enough similarity to help move you forward.
from pydbus import SessionBus
from gi.repository import GLib

# Properties for the new collection (just the label here)
properties = {"org.freedesktop.Secret.Collection.Label": GLib.Variant.new_string("MyCollection")}

ses_bus = SessionBus()
service_name = 'org.freedesktop.secrets'
secret_service = ses_bus.get(service_name, '/org/freedesktop/secrets')
mainloop = GLib.MainLoop()


def _received_pw(dismissed, object_path):
    # Handler for the prompt's Completed signal
    print("dismissed?", dismissed, object_path)
    mainloop.quit()


def show_prompt(prompt_id):
    # Call Prompt() on the returned prompt object and wait for its Completed signal
    prompt = ses_bus.get(service_name, prompt_id)
    prompt.onCompleted = _received_pw
    prompt.Prompt("random_id_for_window")
    mainloop.run()
    print('Prompt closed')


def add_my_collection():
    # CreateCollection returns (collection_path, prompt_path); '/' means no prompt is needed
    result = secret_service.CreateCollection(properties, "")
    print("result from CreateCollection", result)
    if result[1] != '/':
        show_prompt(result[1])


def remove_my_collection():
    for test_collect in secret_service.Collections:
        if "MyCollection" in test_collect:
            this_collection = ses_bus.get(service_name, test_collect)
            result = this_collection.Delete()
            if result != '/':
                show_prompt(result)


def main():
    add_my_collection()
    remove_my_collection()


if __name__ == '__main__':
    main()
Which gave the output:
result from CreateCollection ('/', '/org/freedesktop/secrets/prompt/p1')
dismissed? False /org/freedesktop/secrets/collection/MyCollection
Prompt closed
I've been using busctl to monitor what has been created. For example:
$ busctl --user tree org.freedesktop.secrets
└─/org
├─/org/freedesktop
│ ├─/org/freedesktop/portal
│ │ └─/org/freedesktop/portal/desktop
│ └─/org/freedesktop/secrets
│ ├─/org/freedesktop/secrets/collection
│ │ ├─/org/freedesktop/secrets/collection/MyCollection
│ │ ├─/org/freedesktop/secrets/collection/login
│ │ │ ├─/org/freedesktop/secrets/collection/login/1
│ │ └─/org/freedesktop/secrets/collection/session
│ ├─/org/freedesktop/secrets/prompt
│ │ ├─/org/freedesktop/secrets/prompt/p1
│ │ └─/org/freedesktop/secrets/prompt/u2
│ └─/org/freedesktop/secrets/session
│ ├─/org/freedesktop/secrets/session/s1
└─/org/gnome
└─/org/gnome/keyring
└─/org/gnome/keyring/daemon
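Since your original problem was a locked collection rather than creating a new one, the same pattern should apply to the service's Unlock method, which likewise returns a prompt path ('/' meaning no prompt is needed). An untested sketch, reusing secret_service and show_prompt from the code above:
def unlock_collection(collection_path):
    # Service.Unlock returns (unlocked_paths, prompt_path);
    # a prompt path of '/' means the objects could be unlocked without prompting.
    unlocked, prompt_path = secret_service.Unlock([collection_path])
    print("already unlocked:", unlocked)
    if prompt_path != '/':
        show_prompt(prompt_path)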

Related

Terraform - How to initialize set variable in tfvars

Background
The Terraform documentation clearly states that a variable defined in the root module can be set in a tfvars file.
Type Constraints
The type constructors allow you to specify complex types such as collections:
set(<TYPE>)
Assigning Values to Root Module Variables
When variables are declared in the root module of your configuration, they can be set in a number of ways:
In variable definitions (.tfvars) files, either specified on the command line or automatically loaded.
An input variable of type set can be defined in a root module.
variables.tf
variable "roles" {
description = "IAM roles to grant to the service account"
type = set(string)
}
Question
Please advise how to initialize the set variable in tfvars. Using a function is not allowed, and as far as I have looked around, there is no example in the Terraform documentation. Or, if setting a set is not supported, is that clearly documented?
terraform.tfvars
roles = toset([
  "roles/cloudsql.client",
  "roles/bigquery.dataEditor",
  "roles/storage.admin",
  "roles/pubsub.edito",
  "roles/secretmanager.secretAccessor",
  "roles/artifactregistry.reader"
])
Error: Function calls not allowed
│
│ on sa.auto.tfvars line 1:
│ 1: roles = toset([
│ 2: "roles/cloudsql.client",
│ 3: "roles/bigquery.dataEditor",
│ 4: "roles/storage.admin",
│ 5: "roles/pubsub.edito",
│ 6: "roles/secretmanager.secretAccessor",
│ 7: "roles/artifactregistry.reader"
│ 8: ])
You just define it as:
roles = [
  "roles/cloudsql.client",
  "roles/bigquery.dataEditor",
  "roles/storage.admin",
  "roles/pubsub.edito",
  "roles/secretmanager.secretAccessor",
  "roles/artifactregistry.reader"
]
Terraform will automatically convert the list to the declared type, set(string).
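If it helps, here is a hypothetical sketch of how the set might then be consumed: for_each accepts a set(string) directly. The google_project_iam_member resource and the var.project_id / var.service_account_email variables are assumptions for illustration, not part of the question:
resource "google_project_iam_member" "sa_roles" {
  for_each = var.roles                     # works because var.roles is set(string)
  project  = var.project_id                # assumed variable, not in the question
  role     = each.value
  member   = "serviceAccount:${var.service_account_email}"  # assumed variable
}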

Python 3.9: importlib exec_module() does not execute module

Given the following Python module layout:
app/
├── drivers
│ ├── mydriver
│ │ ├── driver.py
│ │ └── __init__.py
│ └── __init__.py
├── __init__.py
└── main.py
I am trying to dynamically import the "mydriver" module in main.py:
import os
import importlib.machinery
import importlib.util

driver_dir = os.path.join(os.path.dirname(__file__), 'drivers')

loader_details = (
    importlib.machinery.ExtensionFileLoader,
    importlib.machinery.EXTENSION_SUFFIXES
)
finder = importlib.machinery.FileFinder(driver_dir, loader_details)
spec = finder.find_spec('mydriver')
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

# The following line produces AttributeError: module 'mydriver' has no attribute 'driver'
driver = getattr(module, 'driver')
drivers/mydriver/__init__.py contains the following:
from . import driver
print("TEST")
So the result is the AttributeError shown in the inline comment. The print() from __init__.py is also not being executed.
Any hints why the module is apparently not being evaluated?
While I haven't found a root cause, I did find a (not so pretty) workaround. For some reason, the module cannot be executed if it was found using the FileFinder. It does, however, execute if I do the following:
import sys
import importlib.util

sys.path.insert(0, driver_dir)
spec = importlib.util.find_spec('mydriver')
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
So, in short, I don't know what Python wants from a finder in order to also execute modules, not just find the files. Well, at least I have working code for now...
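One possible explanation (just a guess, since the root cause was never confirmed above): the loader_details tuple only registers ExtensionFileLoader with EXTENSION_SUFFIXES, i.e. compiled extension modules (.so/.pyd), so the FileFinder never associates a loader that can execute the plain-Python mydriver/__init__.py. A sketch that also registers the source-file loader, and puts the package in sys.modules so the relative import inside __init__.py can resolve, might look like this:
import os
import sys
import importlib.machinery
import importlib.util

driver_dir = os.path.join(os.path.dirname(__file__), 'drivers')

# Register a loader for .py source files in addition to compiled extensions
loader_details = [
    (importlib.machinery.SourceFileLoader, importlib.machinery.SOURCE_SUFFIXES),
    (importlib.machinery.ExtensionFileLoader, importlib.machinery.EXTENSION_SUFFIXES),
]
finder = importlib.machinery.FileFinder(driver_dir, *loader_details)
spec = finder.find_spec('mydriver')
module = importlib.util.module_from_spec(spec)
sys.modules['mydriver'] = module   # so "from . import driver" can find its parent package
spec.loader.exec_module(module)    # now actually runs drivers/mydriver/__init__.py
driver = getattr(module, 'driver')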

How to create string output with splat operator in terraform

I am creating several count-based ELBs with Terraform.
e.g.
resource "aws_elb" "webserver_example" {
count = var.create_webserver
name = var.name
subnets = data.aws_subnet_ids.default.ids
security_groups = [aws_security_group.elb[count.index].id]
}
I therefore want to be able to get their HTTP endpoints as outputs.
These outputs, I assume, should be strings, and they should somehow incorporate each ELB's DNS name.
However, the following approach using the splat operator does not work:
output "url" {
value = "http://${aws_elb.webserver_example.*.dns_name}:${var.elb_port}"
}
│ Error: Invalid template interpolation value
│
│ on outputs.tf line 2, in output "url":
│ 2: value = "http://${aws_elb.webserver_example.*.dns_name}:${var.elb_port}"
│ ├────────────────
│ │ aws_elb.webserver_example is empty tuple
│
│ Cannot include the given value in a string template: string required.
╵
Is there a way to print multiple count-based strings?
From what I was able to infer from just the code you provided, your var.create_webserver can take different count values (i.e. >= 0). The answer to your specific question is in this code block:
output "url" {
value = [
for dns_name in aws_elb.webserver_example.*.dns_name :
format("http://%s:%s", dns_name, var.elb_port)
]
}
However, be sure to introduce some way to make the names of your security groups and ELBs different, because that will be your next error. For example: name = "${var.name}-${count.index}".
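A sketch of how that might look when applied to the resource from the question (the other arguments unchanged):
resource "aws_elb" "webserver_example" {
  count           = var.create_webserver
  name            = "${var.name}-${count.index}"   # unique name per instance
  subnets         = data.aws_subnet_ids.default.ids
  security_groups = [aws_security_group.elb[count.index].id]
}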
Once you get to that point, you will have output that looks like this:
Outputs:
url = [
"http://so-0-2118247212.us-east-1.elb.amazonaws.com:443",
"http://so-1-1137510015.us-east-1.elb.amazonaws.com:443",
]

What's causing Terraform error: Call to function "formatlist" failed: error on format iteration 0: unsupported value for "%s" at 5: string required

In my Terraform code I have the following locals:
locals {
  merged_acl_contributors = concat(var.workspace.acl.contributors, azurerm_synapse_workspace.workspace.identity)
  contributors            = formatlist("user:%s:rwx", local.merged_acl_contributors)
}
var.workspace.acl.contributors does not have a value (it is just []). When I try to deploy this, I get:
│ Error: Error in function call
│
│ on modules/synapse_v2/main.tf line 10, in locals:
│ 10: contributors = formatlist("user:%s:rwx", local.merged_acl_contributors)
│ ├────────────────
│ │ local.merged_acl_contributors is tuple with 1 element
│
│ Call to function "formatlist" failed: error on format iteration 0:
│ unsupported value for "%s" at 5: string required.
identity in azurerm_synapse_workspace.workspace is a block with multiple attributes. You have to choose what you want. For example:
merged_acl_contributors = concat(var.workspace.acl.contributors, [azurerm_synapse_workspace.workspace.identity.principal_id])
Looking at local.merged_acl_contributors was the answer. The value was wrong. Instead of azurerm_synapse_workspace.identity, it needed to be azurerm_synapse_workspace.managedidentity.

Auto-importing modules in a folder and allowing them to be selected by name from a string variable

How it's organized
Currently, this is my folder structure:
├ Websites
│ ├ Prototypes
│ │ ├ __init__.py
│ │ ├ Website.py
│ │ └ XML.py
│ ├ __init__.py
│ └ FunHouse.py
└ scrape_sites.py
This is FunHouse.py:
from Websites.Prototypes.Website import Website, Search


class FunHouse(Website):
    def doStuff(self):
        # does stuff
        pass
This is my __init__.py in the Websites folder:
from Websites.Prototypes.Website import Website
from Websites.FunHouse import FunHouse


def choose_site(website):
    if website == "FunHouse":
        return FunHouse()
    else:
        return Website()
And in my scrape_sites.py file is the following:
import Websites
# Some code that loads a text file and sets website_string to "FunHouse"
website = Websites.choose_site(website_string)
website.doStuff()
My Question
If I want to add a website, I have to edit __init__.py. Is there any way to make it so that I don't have to edit the __init__.py file whenever I add a new website? So if I create Google.py, I can just throw it into the Websites folder and it will be available to call?
You can add the __all__ variable to your __init__.py file once:
__all__ = ["Google", "Yahoo", "Bing"]
Now each of these modules will be imported, and available for you to use, whenever you do from Websites import *.
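A minimal sketch of how that is typically used, assuming Websites/Google.py exists and defines a Google class (matching the naming convention in the question):
# scrape_sites.py (sketch): the modules named in __all__ are pulled in by the wildcard import
from Websites import *

website = Google.Google()   # assumes Websites/Google.py defines class Google
website.doStuff()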
I figured it out. I don't even need the __init__.py files:
def choose_site(website_str):
    mod = __import__('Websites.' + website_str, fromlist=[website_str])
    obj = getattr(mod, website_str)
    return obj()
This relies on the class name being the same as the Python filename: if you pass in "Google" as the argument, it will import the Websites.Google module and look up the Google class inside it.
If you're looking at this and want more flexibility, this will work:
def choose_class(module_name, filename, class_name):
    """
    :param module_name: The folder (package)
    :param filename: The .py file
    :param class_name: The class in the .py file
    :return: An instance of the selected class
    """
    mod = __import__(module_name + '.' + filename, fromlist=[filename])
    obj = getattr(mod, class_name)
    return obj()
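A hypothetical call, using the Google.py example from the question:
# Imports the Websites.Google module and instantiates its Google class
website = choose_class("Websites", "Google", "Google")
website.doStuff()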
