How to set a `User cap` for a particular domain in GitLab

Original question:
I want to limit the number of users from a particular domain that can register into my Gitlab instance. I noticed that I could set a "user cap", but it wasn't specific to a domain.
For example:
I want to limit the number of users registered from these domains: 20 users from testdomain1.com and 30 users from testdomain2.com are allowed to sign up. So, if 20 users have already registered successfully from testdomain1.com, the next user from testdomain1.com will not be allowed to sign up.
What should I do to achieve this?
Edit (2021-11-18):
I added a validation to the User model:
# gitlab/app/models/user.rb
class User < ApplicationRecord
  # ...
  validate :ensure_user_email_count
  # ...

  # The domain part of the email, including the leading "@",
  # e.g. "@testdomain1.com".
  def email_domain
    email[/@[^@]*\z/]
  end

  def ensure_user_email_count
    # SELECT count(*) FROM users WHERE email LIKE '%@test.com';
    if User.where("email LIKE ?", "%#{email_domain}").count >= 30
      errors.add(:email, _('domain already has 30 registered users.'))
    end
  end
end
This validation sets a user cap of 30 for every domain, but it still cannot set a different user cap per domain.
Since the related issue has not received any response yet, I'm trying to implement it myself. It seems that I need to extend the UI of the Admin Settings page and add some related tables to the database in order to set a different user cap for each email domain; an interim sketch follows.
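Until then, the caps could be hardcoded in the model. In this minimal sketch the DOMAIN_CAPS hash is hypothetical and would eventually be replaced by the settings table:
# sketch only -- DOMAIN_CAPS is hypothetical, keys match email_domain above
DOMAIN_CAPS = {
  '@testdomain1.com' => 20,
  '@testdomain2.com' => 30
}.freeze

def ensure_user_email_count
  cap = DOMAIN_CAPS[email_domain]
  return if cap.nil? # domains without a cap are unrestricted
  if User.where("email LIKE ?", "%#{email_domain}").count >= cap
    errors.add(:email, "domain already has #{cap} registered users.")
  end
end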

The GitLab user cap seems to be per GitLab instance.
So if both of your domains reference the same GitLab instance, only one user cap is possible.
But if each of your domains redirects to its own autonomous GitLab instance (one per domain), then you should be able to set a user cap per domain.
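For the instance-wide cap itself, a minimal sketch from the GitLab Rails console; hedged, since the new_user_signups_cap column name is an assumption based on recent GitLab versions:
# GitLab Rails console -- sketch only; new_user_signups_cap is assumed to be
# the application_settings column behind the Admin Area "user cap" setting
ApplicationSetting.current.update!(new_user_signups_cap: 50)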
The OP, Ann Lin, has created issue 345557 to track that feature request.
The OP reports:
A dedicated table is needed to store the caps.
But I don't have enough time right now to modify the UI, so I found a simple way to do this:
The Allowed domains for sign-ups setting, called domain_allowlist in the database, is a text column:
gitlabhq_production=# \d application_settings
...
domain_allowlist | text | | |
...
gitlabhq_production=# select domain_allowlist from application_settings;
domain_allowlist
-------------------
--- +
- testdomain1.com+
- testdomain2.com+
(1 row)
I can change testdomain1.com to testdomain1.com#30 to store the user cap, and use a regex to extract the number 30.
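A hedged sketch of reading those annotated entries back into a { domain => cap } hash; this assumes the entries keep the domain#30 format shown above, and that GitLab's normal allowlist check is adjusted to strip the suffix:
# domain_allowlist is an array of strings on the ApplicationSetting model
def domain_caps
  ApplicationSetting.current.domain_allowlist.to_h do |entry|
    domain, cap = entry.split('#', 2)
    [domain, cap && cap.to_i]
  end
end
# => { "testdomain1.com" => 30, "testdomain2.com" => nil }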
I will modify the UI and add the database table later, and I'll open a merge request on GitLab when I'm done.

Related

KQL Query for Azure Sentinel

I need assistance in getting a summary of user domains from the Sentinel SigninLogs.
SigninLogs
| where AppId == "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
| extend UserDomains = split(UserPrincipalName, '@')[1]
| summarize TotalAttempts = count(), Failed = countif(ResultType != 0), Succeeded = countif(ResultType == 0), LastAttempt = max(TimeGenerated), FirstAttempt = min(TimeGenerated), CountofUniqueID = dcount(UserPrincipalName), DomainCount = dcount(tostring(UserDomains))
The result has most of the information I need. However, I need a bit more clarity.
The objective of this report is to understand how many external users are consuming this application, and from which domains they are accessing it. So I need to produce a summary split between internal and external users.
How can I get the count of all signed-in users whose domain name ends with *mydomain.com (covering the root domain and child domains)?
How can I get the count of user domains specific to external users, i.e. the unique user domains minus the internal user domains (anything ending with *mydomain.com)?
Is there a way to concatenate all the external domains with ";" as a delimiter? In PowerShell we use -join ";". Is there anything similar in KQL?
Appreciate your help on this.
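A hedged sketch of the three pieces in KQL (table and column names taken from the query above; mydomain.com stands in for the internal domain):
// 1) users whose domain is the root domain or any child domain
SigninLogs
| where AppId == "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
| extend UserDomain = tostring(split(UserPrincipalName, '@')[1])
| where UserDomain == "mydomain.com" or UserDomain endswith ".mydomain.com"
| summarize InternalUserCount = dcount(UserPrincipalName)

// 2) and 3) count the external domains and join them with ";"
SigninLogs
| where AppId == "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
| extend UserDomain = tostring(split(UserPrincipalName, '@')[1])
| where not(UserDomain == "mydomain.com" or UserDomain endswith ".mydomain.com")
| summarize ExternalDomainCount = dcount(UserDomain), ExternalDomains = make_set(UserDomain)
| extend ExternalDomainList = strcat_array(ExternalDomains, ";")
Here strcat_array plays the role of PowerShell's -join.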

Destroy a session in Load Impact

I have created a user scenario in Load Impact to simulate a couple of hundred users in our web store.
The problem is that I can't seem to simulate the users in our Azure Queue.
The queue only increases by one user, not by the hundreds of users I want :)
I have created a random correlation id, but it seems like the session is still there.
Is there a way to destroy the session, so that a new session is created each time the script loops?
I found a LUA reference that mentions destroy:session, but it won't work for me.
function rnd()
  return math.random(0000, 9999)
end
http.request_batch({
  {"POST", "http://STORE.////",
    headers={["Content-Type"]="application/json;charset=UTF-8"},
    data="{\"ChoosenPhoneModelId\":0,\"PricePlanId\":\"phone\",\"CorrelationId\":\"e97bdaf6-ed61-4fb3-".. rnd() .."-d3bb09789feb\",\"ChoosenPhoneColor\":{\"Color\":1,\"Code\":\"#d0d0d4\",\"Name\":\"Silver\",\"DeliveryTime\":\"1-2 veckor\",\"$$hashKey\":\"005\"},\"ChoosenAmortization\":{\"AmortizationLength\":24,\"Price\":312,\"$$hashKey\":\"00H\"},\"ChoosenPriceplan\":{\"IsPostpaid\":true,\"IsStudent\":false,\"IsSenior\":false,\"Title\":\"Fast \",\"Description\":\"Hello.\",\"MonthlyAmount\":149,\"AvailiableDataPackages\":null,\"SubscriptionBinding\":1,\"$$hashKey\":\"00M\"},\"ChoosenDataPackage\":{\"Description\":\"20 GB\",\"PricePerMountInKr\":149,\"DataAmountInGb\":20,\"$$hashKey\":\"00U\"}}",
    auto_decompress=true}
})
Any tips on how to.
Thanks in advance.
The correlation id isn't a random number. It's set by your server in a cookie. Get it and use it like this:
local response = http.request_batch({
  {"GET", "http://store.///step1", auto_decompress=true},
})
-- extract the correlation id from the cookie set by the server
local strCorrelationId = response[1].cookies['corrIdCookie']
http.request_batch({
  {"POST", "http://STORE.////",
    headers={["Content-Type"]="application/json;charset=UTF-8"},
    data="{\"ChoosenPhoneModelId\":0,\"PricePlanId\":\"phone\",\"CorrelationId\":\"" .. strCorrelationId .. "\",\"ChoosenPhoneColor\":{\"Color\":1,\"Code\":\"#d0d0d4\",\"Name\":\"Silver\",\"DeliveryTime\":\"1-2 veckor\",\"$$hashKey\":\"005\"},\"ChoosenAmortization\":{\"AmortizationLength\":24,\"Price\":312,\"$$hashKey\":\"00H\"},\"ChoosenPriceplan\":{\"IsPostpaid\":true,\"IsStudent\":false,\"IsSenior\":false,\"Title\":\"Fast \",\"Description\":\"Hello.\",\"MonthlyAmount\":149,\"AvailiableDataPackages\":null,\"SubscriptionBinding\":1,\"$$hashKey\":\"00M\"},\"ChoosenDataPackage\":{\"Description\":\"20 GB\",\"PricePerMountInKr\":149,\"DataAmountInGb\":20,\"$$hashKey\":\"00U\"}}",
    auto_decompress=true}
})
That is what makes your user unique. If you set CorrelationId to just any random number, your server will simply not accept the session in your queue.
Once it's unique and correct, your server will accept the POST properly.

Postfix and save to sent mail dir

I know this might be a dumb question, or one that comes from a lack of knowledge, but I hope someone can still answer it. I did try to read a lot of Postfix documentation but found no answer. I don't even know whether this is Postfix-specific or a general mail server question.
So I have a mail server, just a clean Postfix install that delivers email.
I've defined my users and connected over IMAP and SMTP using Thunderbird.
When I went to the Thunderbird account settings and disabled "place a copy", Postfix did not put a copy of the sent message in the user's .Sent folder.
However, I've also connected my Gmail, Hotmail and Yahoo accounts and disabled "place a copy", and a copy still shows up in their Sent folders.
So in this case there are two options:
Something is wrong with my Postfix configuration.
Gmail, Hotmail and Yahoo put a copy in the Sent folder as a separate process on the server side.
Just for the record: having searched around for a how-to and not found one, I am posting it here.
The only (easy) way I've found to save sent emails is the sender_bcc solution (with its attendant faults).
I am using Postfix / Dovecot / Sieve / MySQL with virtual mailboxes.
In /etc/postfix/main.cf add:
sender_bcc_maps = mysql:/etc/postfix/mysql-virtual-bcc-maps.cf
Create file /etc/postfix/mysql-virtual-bcc-maps.cf:
user = (database user)
password = (database password)
hosts = 127.0.0.1
dbname = (database databasename)
query = SELECT CONCAT_WS('',LEFT('%s', LOCATE('@', '%s')-1),'+sent@',SUBSTRING('%s', LOCATE('@', '%s')+1)) AS destination FROM virtual_users WHERE email='%s' AND autosent=1
You'll note that in my query I've added a (tinyint default 0) column to my virtual_users table so I can turn this automatic sent-items feature on and off per user. The query takes the sender email address that Postfix gives it, splits it in half at the @ sign, and adds +sent to the address so it looks like sender+sent@domain.tld. This allows Sieve in the next step to pick it up and drop it straight into Sent items. A worked example follows.
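For illustration, the same expression evaluated for a hypothetical sender alice@domain.tld:
SELECT CONCAT_WS('',
    LEFT('alice@domain.tld', LOCATE('@', 'alice@domain.tld') - 1),
    '+sent@',
    SUBSTRING('alice@domain.tld', LOCATE('@', 'alice@domain.tld') + 1)
) AS destination;
-- destination: alice+sent@domain.tld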
In /etc/dovecot/sieve/default.sieve add:
require ["fileinto", "mailbox", "envelope", "subaddress", "imap4flags"];
if envelope :detail "to" "sent" {
  addflag "\\Seen";
  fileinto :create "Sent";
  stop;
}
It's also helpful to modify /etc/dovecot/conf.d/15-mailboxes.conf and add auto-subscribe for Sent (and Junk, Trash and others for that matter):
mailbox Sent {
  special_use = \Sent
  auto = subscribe
}
I think that is all (I'm posting this the next day after doing it, so I think I got it all...)
Postfix itself does not place copies of sent messages anywhere; it receives messages and delivers them to the recipient. Saving sent messages to your own mailbox is the responsibility of your user agent (Thunderbird, in your case).
It's important to understand that Postfix (and other traditional Unix SMTP servers) don't have a "user" concept. Yes, if so configured it's possible to authenticate by supplying a username and a password, but Postfix doesn't use this identity information.
That said, it's not impossible to configure Postfix to do what you expected – sender_bcc_maps can be used to add a recipient to messages sent by you, and by adding yourself and using a filter in your mail client (or mail delivery agent like procmail) you can make sure that messages sent by you end up in the Sent folder.
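A minimal sketch of that idea (the map file name is illustrative; the single PCRE rule BCCs every outgoing message back to its own sender, and the client-side filter then moves the copy to Sent):
# /etc/postfix/main.cf
sender_bcc_maps = pcre:/etc/postfix/sender_bcc

# /etc/postfix/sender_bcc -- BCC every outgoing message to its sender
/^(.*)$/ $1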
I am running an installation with automatic copies created by sender_bcc_maps. It works fine. You have to check the sender, otherwise anyone could create sent mails in other users' Sent folders.
I have solved it with two virtual domains: one for the user and one for the copy.
But there is a big problem with sender_bcc_maps: all Bcc recipients are removed from the sent copy, so you can no longer see who got a blind copy of the mail.
As 'ego2dot0' said above, you don't need any MDA filters (sieve etc.) to do this. It can be done using Postfix alone, although it took me a while to figure out how to do it.
You have to use sender_bcc_maps AND virtual_mailbox_maps features together.
You have to use a virtual domain dedicated specifically to copies-to-self. If your actual domain is "your.domain.tld", you can use e.g. the subdomain "copyself.your.domain.tld". This subdomain does not have to actually exist, i.e. be defined in the DNS (moreover, it's better that it isn't defined, so nobody accidentally sends mail to it from outside). It is a purely virtual domain that is recognized only by Postfix.
1) Configure sender_bcc_maps to BCC mail coming from user@your.domain.tld to user@copyself.your.domain.tld. You can do this for only a few selected users using a regular "hash" type map, or for all users at once using a PCRE type map and regular expressions.
2) You have to define your virtual domain in virtual_mailbox_domains, like this:
virtual_mailbox_domains=copyself.your.domain.tld
3) Configure virtual_mailbox_maps so that the destination mailbox for the address "user@copyself.your.domain.tld" is the actual "Sent" mailbox of the user "user". For example (assuming you are using regular system users and the Maildir format, as in my case), the path to the "Sent" mailbox of user "user" will be "/home/user/Maildir/.Sent". So, you can define the common part of the path as virtual_mailbox_base, e.g.
virtual_mailbox_base=/home
and then in the virtual mailbox map enter the rest of the path like this:
user@copyself.your.domain.tld user/Maildir/.Sent/
(the trailing / is important to indicate the Maildir format).
Again, you can use a PCRE type map to do this for all users, as sketched below.
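For example, hedged sketches of the PCRE variants for steps 1 and 3 (the file names are illustrative):
# sender_bcc_maps = pcre:/etc/postfix/sender_bcc.pcre
/^(.+)@your\.domain\.tld$/ $1@copyself.your.domain.tld

# virtual_mailbox_maps = pcre:/etc/postfix/copyself.pcre
/^(.+)@copyself\.your\.domain\.tld$/ $1/Maildir/.Sent/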
4) To properly save mail to the mailbox, Postfix also needs to know the proper UID and GID for the particular user, so you have to use the virtual_uid_maps and virtual_gid_maps parameters as well. If you are using virtual users, it's probably enough to define "static" type maps specifying a single UID and GID of the system user that owns all the virtual mailboxes. However, if you are using system users like me, you need the actual UID and GID of each user. If you have only a few users, you can use a regular "hash" type map, with entries like these:
user@copyself.your.domain.tld 2001
or you can try to setup a pipeline with "pipemap" map type, that uses some PCRE maps and "unix:passwd.byname" map to obtain the UIDs and GIDs for all users (I haven't done this part, as my Postfix installation is compiled without "pipemap" type support).
So to sum everything up, use something like this:
In /etc/postfix/main.cf file, add the following lines:
sender_bcc_maps=hash:/etc/postfix/sender_bcc
virtual_mailbox_domains=copyself.your.domain.tld
virtual_mailbox_base=/home
virtual_mailbox_maps=hash:/etc/postfix/copyself
virtual_uid_maps=hash:/etc/postfix/copyself_uids
virtual_gid_maps=hash:/etc/postfix/copyself_gids
/etc/postfix/sender_bcc contains a bunch of lines like:
user@your.domain.tld user@copyself.your.domain.tld
/etc/postfix/copyself contains, respectively, lines like:
user@copyself.your.domain.tld user/Maildir/.Sent/
/etc/postfix/copyself_uids and /etc/postfix/copyself_gids contain, respectively, lines like:
user@copyself.your.domain.tld 2001
I have done this on my server and it works great for me.

Strict control over the statement_timeout variable in PostgreSQL

Does anybody know how to limit a user's ability to set variables, specifically statement_timeout?
Regardless of whether I alter the user to have this variable set to one minute, or set it to one minute in the postgresql.conf file, a user can always just type SET statement_timeout TO 0; to disable the timeout completely for that session.
Does anybody know a way to stop this? I know some variables can only be changed by a superuser, but I cannot figure out whether there is a way to make this one of those controlled variables. Alternatively, is there a way to revoke SET from their role?
In my application, this variable is used to keep random users (user registration is open to the public) from using up all the CPU time with (near-)infinite queries. If they can disable it, then I must find a new methodology for limiting resources per user. If there is no way to secure this variable, what other ways of achieving this goal would you suggest?
Edit 2011-03-02
The reason the database is open to the public and arbitrary SQL is allowed is that this project is a game played directly in the database. Every player is a database user. Data is locked down behind views, rules and triggers; CREATE is revoked from public and from the player role to prevent most alterations to the schema, and SELECT on pg_proc is revoked to protect game-sensitive function code.
This is not some mission critical system I have opened up to the world. It is a weird proof of concept that puts an abnormal amount of trust in the database in an attempt to maintain the entire CIA security triangle within it.
Thanks for your help,
Abstrct
There is no way to override this. If you allow the user to run arbitrary SQL commands, changing statement_timeout is just the tip of the iceberg anyway... If you don't trust your users, you shouldn't let them run arbitrary SQL, or you must accept that they can run, well, arbitrary SQL. And have some sort of external monitor that cancels long-running queries.
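For the external monitor, a hedged sketch that a superuser cron job could run periodically (the pg_stat_activity column names are those of modern PostgreSQL; the one-minute budget is illustrative):
-- cancel any query that has been running for more than a minute
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '1 minute'
  AND pid <> pg_backend_pid();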
Basically you can't do this in plain Postgres.
Meanwhile, to accomplish your goal you can use some kind of proxy and rewrite/forbid certain queries.
There are several solutions for that, e.g.:
db-query-proxy - an article on how it was born (in Russian).
PgBouncer + pgbouncer-rr-patch
The latter contains very useful examples and is very simple to do in Python:
import re

def rewrite_query(username, query):
    # the patterns are regular expressions, so parentheses are escaped
    q1 = r"SELECT storename, SUM\(total\) FROM sales JOIN store USING \(storeid\) GROUP BY storename ORDER BY storename"
    q2 = r"SELECT prodname, SUM\(total\) FROM sales JOIN product USING \(productid\) GROUP BY prodname ORDER BY prodname"
    if re.match(q1, query):
        new_query = "SELECT storename, SUM(total) FROM store_sales GROUP BY storename ORDER BY storename;"
    elif re.match(q2, query):
        new_query = "SELECT prodname, SUM(total) FROM product_sales GROUP BY prodname ORDER BY prodname;"
    else:
        new_query = query
    return new_query

Running a login step prior to scenario outline in cucumber

I'm using Cucumber with Webrat/Mechanize to test a PHP site, and I'm trying to improve the speed at which the tests run by avoiding unnecessary steps.
I want to use a scenario outline to check a whole lot of pages are accessible/protected depending on the user who is logged in:
Scenario Outline: Check page access is secure
Given I am logged in as "<user>"
And I am on <page>
Then I should see "<message>"
Examples:
|user |page |message |
|admin |home page |Welcome to my site |
|admin |admin page|Site administration |
|editor|home page |Welcome to my site |
|editor|admin page|Access denied |
|guest |home page |Please login |
|guest |admin page|Access denied |
...
This works, but given that I have 10 roles and hundreds of pages to check, there is a lot of overhead in running the login step every time the outline runs.
I'm wondering if there is a way to run the login step once for each role, then visit each page in turn without needing to log in every time, i.e. run "login, visit 1, visit 2, visit 3" instead of "login, visit 1, login, visit 2, login, visit 3".
I've tried using hooks and Background, but can't seem to find an approach that works. Is this possible?
Instead of putting all the information about what is accessible/protected in the feature, consider putting them in the step defs (even better would be to use the definitions in your application, but that isn't easy if your app is not in process)
If you can live with a feature that is as abstract as
Given I am an admin
Then I should be able to access admin pages
Then you can do all the work much more efficiently in step defs
Following is just a code sketch to give some idea of what you can do ...
# step def
module AccessHelper
  AdminPages = [
    {page: ..., msg: ...},
    ...
  ]

  def login_as ... ; end
  def correct_message? msg ... ; end

  def check_admin_access_for user
    @errors = []
    login_as user
    AdminPages.each do |page|
      visit page[:path]
      @errors << page unless correct_message?
    end
  end
end
World(AccessHelper)

Then "I should be able to access admin pages" do
  check_admin_access_for @I
  @errors.should be_empty
end
You can of course expand this using the full power of Ruby to meet your particular needs. The fundamental idea is that you can always take several Cucumber actions and abstract them into one Cucumber action.
Hope that's useful.
You could implement the Given step to only log in once for each role:
# lazily log in each role as needed, and cache the logins in a hash table
$logins = Hash.new do |logins, role|
  logins[role] = do_expensive_login(role)
end

Given /^I am logged in as "([^"]+)"$/ do |role|
  @login = $logins[role]
end
Of course, if the future steps can change the state of the login, or change the world such that the login is no longer valid, this might hose you down the line, so tread carefully.
