How to import a Notes ID file into a mail database using LotusScript - XPages

Is there a way to programmatically import a user's Notes ID into their mail database using LotusScript? I'm trying to automate the manual Notes ID management process for the secure mail feature in IBM Domino Security Preferences.
Thanks for your feedback!

Usually the ID file is on the user's local drive, so you cannot access it centrally. If it is on a network drive, e.g. within a per-user subdirectory, then of course you'll be able to get and process it.
So, two scenarios:
On network drive:
- iterate over all drives, search for the ID files, and attach each one to the corresponding mail database. However, it may be difficult to determine which network drive belongs to which Domino user, and one user might have multiple ID files. This could become very complicated!
On local drive:
- you can perform the action quite easily by having the user run a LotusScript, e.g. by sending a mail containing a button that the user has to press. When the user clicks it, it is clear which ID file is currently in use, and the code will have access to it (because the user is using it at that moment). However, this solution relies on the user actually pressing the button.
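The button approach in the second scenario could look roughly like this in LotusScript. This is only a sketch: the form and item names are made up, and it assumes the path of the ID currently in use is recorded in notes.ini under KeyFilename, which is where the Notes client keeps it:

```lotusscript
' Button code in a mail sent to the user.
' Requires %INCLUDE "lsconst.lss" in (Declarations) for EMBED_ATTACHMENT.
Sub Click(Source As Button)
    Dim session As New NotesSession
    Dim db As NotesDatabase
    Dim doc As NotesDocument
    Dim rtitem As NotesRichTextItem
    Dim idPath As String

    ' KeyFilename in notes.ini points at the ID file currently in use
    idPath = session.GetEnvironmentString("KeyFilename", True)

    Set db = session.CurrentDatabase          ' run from the user's mail file
    Set doc = db.CreateDocument
    doc.Form = "UserIDImport"                 ' hypothetical form name
    Set rtitem = New NotesRichTextItem(doc, "IDAttachment")
    Call rtitem.EmbedObject(EMBED_ATTACHMENT, "", idPath)
    Call doc.Save(True, False)
End Sub
```

A server-side agent could then collect the attached IDs from these documents and process them further.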

Related

Best practice to share google drive API credentials for being used by a script?

In a nutshell: shall I share OAuth2 credentials in our source code, with a scope granting full write access to the Google Drive of a dedicated single-purpose Google account?
I've written a Python script which either saves some data into a pre-existing Google Sheets file or creates a new Google Sheets file in a given Google Drive folder (both of which are publicly editable, for sharing between teams).
For this I followed the steps and tutorials outlined by Google and other sources, after which I obtained the OAuth2 credentials needed for my script to authenticate with the Google Drive and Google Sheets APIs.
Those credentials are derived from a single-purpose Google account which I created for this script.
Now I would like to share this script with other team members but am unsure about how to proceed regarding the credentials; either:
A.)
I would incorporate the Google-suggested workflow which lets the user of the script authenticate him/herself: the user starts the script, is directed to the Google authentication web login, authorizes the script, and the script then saves and uses those credentials of the user for writing data into a public Google Sheets file (not necessarily a private one owned by the user).
This has the downsides that:
- the user would have to trust my script with credentials that could enable it to read/write all of the data in the user's Drive account. While I do not mean any harm, of course, it still seems rather too much to ask and to be responsible for;
- it breaks the intended straightforward workflow of my script;
- it is not technically necessary at all, because the script shall only write into public Sheets files / Drive folders; so why should it need write access to all the user's Drive files?
B.)
I would hardcode the credentials of our single-purpose account into the script, which has the only downside that anyone could obtain those credentials once the script's source code is shared. But these credentials would only enable an attacker to read/write data in that account's Google Drive, not to take control of the whole account itself, due to the limited scope of the OAuth2 credentials (I've used the "https://www.googleapis.com/auth/drive" scope). Additionally, as said before, we would only use the script to read/write data in public Sheets files owned by real Google accounts, so we would never use the single-purpose account's own Drive, and thus no attacker could destroy our data.
Thus, I am rather opting for option B, but I can't shake the anxiety that comes with hardcoding publicly readable credentials...
What would you suggest?
We decided on option A: users need to create the client_secret.json and credentials.json files themselves. It is unfortunately not the most straightforward option, but it is the most secure one. Sharing credentials in a public repo is just a big no-go, no matter the details.
Also, for the sake of completeness: another alternative would be to run our application on a server where we could store the client_secret; a user would then just be presented with a browser popup in which he/she authorizes our service.
But we don't follow that option either, since the script shall contain only the core logic and serve as a base for others to develop upon.
That's our motivation behind the decision.
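The core of option A can be sketched in a few lines of Python: the script ships no secret at all and only loads whatever client_secret.json the user has created for their own account (the file name is the one from the answer; the helper function itself is made up for illustration):

```python
import json
import os


def load_client_secret(path="client_secret.json"):
    """Load the OAuth client configuration from a user-supplied file
    instead of hardcoding it in the source code."""
    if not os.path.exists(path):
        raise FileNotFoundError(
            f"{path} not found -- create it for your own Google account "
            "in the Google Cloud console and place it next to the script")
    with open(path) as f:
        return json.load(f)
```

The point is purely structural: the repository stays free of credentials, and each user remains responsible for (and in control of) their own.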

Linux Pam-ldap authentication with multiple bases

I'm managing a Linux CentOS system that works as a fileserver (and more) - accessed through SSH.
For the purposes of authentication, we are using pam-ldap with the company LDAP server. User creation and group membership are managed on the Linux system.
When a user logs in, the authentication will be handled by pam-ldap.
Currently, we have configured pam-ldap to search only in the country specific part of the ldap-server, when looking up a user.
We have then handled anyone outside the country by creating a local user account for them.
However, we have seen an increasing number of out-of-country colleagues needing access to the server.
The problem with using local users is that they need special handling to enforce the password strength and change rules that we get automatically with LDAP authentication.
Today, we use an ldap base similar to this
c=us,ou=auth,o=company.tld.
For out-of-country colleagues, the base would need to be slightly different, e.g.
c=uk,ou=auth,o=company.tld
Unfortunately, we cannot simply remove the country component of the base, because the logins we use today are only unique within a country.
For each login, we know the proper base to use, but it is not clear to me, how we would (automatically) feed this information into the authentication process.
Can this be done?
Thanks
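For illustration, the single-base setup described above corresponds to something like the following in a pam_ldap-style /etc/ldap.conf (a sketch; exact file locations and directive names vary by distribution and implementation). Note that some implementations, e.g. nslcd (nss-pam-ldapd), accept multiple per-map search bases, which may be worth investigating for this scenario:

```
# /etc/ldap.conf (sketch) -- one country-specific search base
uri  ldap://ldap.company.tld
base c=us,ou=auth,o=company.tld

# nslcd's /etc/nslcd.conf allows several bases per map, e.g.:
# base passwd c=us,ou=auth,o=company.tld
# base passwd c=uk,ou=auth,o=company.tld
```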

Spotfire - Batch of user password

We have a large number of users and I need to update their passwords. The current process is to go to the Administration Manager, search for each user, and change the password. I want to know whether there is a way to do this in batch, using a script that reads the users and passwords I need to change from an external file, i.e. having a list and then importing it into Spotfire.
Unfortunately, I can't find a way to batch-update users in Spotfire Server. If you needed to import a bunch of new users, you could use the command-line interface, but it doesn't look like the import-users command will "overwrite" existing users (it should report an error when trying to add a user who already exists).
I believe the password field in the Spotfire database is Base64-encoded, so you could theoretically run a database update query, but this is really dangerous and may wreck your database. If you try anything like this, be sure to work in a test environment and be doubly sure that you have backups!
My recommendation is to use something like LDAP. The Spotfire database is not really designed for administering large numbers of users, or at least it doesn't seem that way from an administrator's perspective.
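As an aside on the encoding point: Base64 is a reversible encoding, not a hash, so anyone with read access to that table can recover the stored value. A quick generic Python illustration (not Spotfire-specific):

```python
import base64

# Encoding a password with Base64 is trivially reversible:
encoded = base64.b64encode(b"s3cret!").decode("ascii")
print(encoded)                   # czNjcmV0IQ==

original = base64.b64decode(encoded)
print(original)                  # b's3cret!'
```

This is another reason to hand authentication over to LDAP rather than managing passwords in the application database.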

Statistics usage of a database

Is there a way to monitor statistics on usage of documents within a database?
I have a Lotus Notes database hosted on a local server. I know I can get some info from 'User Detail...' in the Info tab of the Database properties (right-click the database in Domino Designer), which basically shows me which users accessed the database and which CRUD actions were performed, but I was looking for something more in-depth, i.e. which documents in particular are read the most and by whom.
Since this is StackOverflow, not SuperUser or ServerFault, I'm going to treat this as a programming question. (On those other sites, they would tell you that tracking actions at the document level is not built into Notes and Domino's functionality, but there are some 3rd party add-on products that can do it for you.)
You can implement tracking features down to the document level in Notes and Domino using the Extension Manager API portion of the Notes C API. There is also a free package on the OpenNTF.org web site, called TriggerHappy, which provides a framework for using the Extension Manager features to call Java agents when events that you want to track occur. This can make it significantly easier to accomplish what you want, but it will not scale as well for large user bases.
You should also bear in mind that since Notes and Domino are designed for use in a distributed environment in which users can do their work in local replica databases, a tracking mechanism that is based on an Extension Manager plugin running on the server may not see changes at the moment that users make them. Instead, it might see them when those changes replicate from the user's computer to the server -- and replication does not guarantee that order is preserved, so the server might see some things happen in a different order than what the user actually did.
Have a look at Activity Trends; see the Notes help.
If you need more details, you have to implement it by yourself.

What steps are there to prevent someone inside a company from altering user data (e.g. Facebook, Google, etc.)?

I've always wondered what security mechanisms are in place to prevent an employee (DBA, developer, manager, etc.) from modifying users' data. Let's say a user has a Facebook account. Knowing how databases work, I know that at least two employees in that company have root access to them. So my question is: what if such an employee decides to alter someone's profile, insert bad or misleading comments, etc.?
Any input is appreciated. Thanks.
If a person has full write access to a database, there is nothing preventing them from writing to that database. A user who has unrestricted access to Facebook's database engine has nothing other than company policy to prevent them from altering that data.
Company policy and personal honor are usually good enough. In the end, though, there's always that risk; a Google employee was fired in July for reading users' private account data. In short, the people who write software for a system can make that system do whatever they like, and there is absolutely no way to prevent this; people who can read a source of data can read that source of data, and people who can edit it can edit it. There is no theoretical way to prevent this from being the case.
In short, all that can be done is to have more than one person watching the database, and fire people who try to damage it. As a user, all you can do is trust the company that controls the data.
This is a user access control problem. You should limit who has DBA access. You should limit what code developers have access to, such as only the projects that they need to do their job.
An afterthought is to keep backups and logs. If someone does change a record in the database, you should have a system in place to identify and fix the problem.