Run a Python script within Python & check if a value is output (if statement, Python 3)

I have a Python 3 script which runs through a list of Amazon AWS account numbers (using Boto3), checks whether their access keys are older than x days, and reports on it.
I'd like to make my report nicer by checking whether the output contains any users, and write the result to a file for SNS to email to me.
Here is the code I've already tried:
if not os.system("python3 ListUsersWithAccessKeysOlderThan90Days.py " + accountNumber):
    print("No Content", file=reportName)
else:
    print("Content", file=reportName)
I've already tried this too:
if os.system("python3 ListUsersWithAccessKeysOlderThan90Days.py " + accountNumber) == " ":
    print("No Content", file=reportName)
else:
    print("Content", file=reportName)
But I only seem to get this in my output file:
Running on account accountNumber - accountLabel - accountEnvironment
No Content
Running on account accountNumber - accountLabel - accountEnvironment
No Content
Running on account accountNumber - accountLabel - accountEnvironment
No Content
Ideally, I'd like it to look like this:
Running on account accountNumber - accountLabel - accountEnvironment
No Content
Running on account accountNumber - accountLabel - accountEnvironment
Content
Running on account accountNumber - accountLabel - accountEnvironment
No Content
No Content = No access keys need rotating.
Content = User needs their key rotating.
I can achieve this in Bash, but I wouldn't mind trying to get it working in Python 3.
Here is my Bash example:
if [[ -z "$(python3 ListUsersWithAccessKeysOlderThan90Days.py ${ACCOUNT})" ]]; then
    echo -e "$ACCOUNT ($LABEL) is up to date no need to report\n" >> $REPORT
else
    echo -e "$ACCOUNT Need keys rotating" >> $REPORT
fi
Any help would be most appreciated.
Thanks,

You can get the status of IAM users and credentials from the AWS Credentials Report. That would probably satisfy most needs.
If you prefer Python, then I've written a basic script that can be used to print out all IAM users in an account whose access keys are over 90 days old (regardless of when they last used these keys).
import sys
import boto3
from datetime import datetime, timedelta, timezone

DAYS = 90

iam = boto3.client('iam')
sts = boto3.client('sts')

identity = sts.get_caller_identity()
account = identity['Account']

header_printed = False
count = 0
today = datetime.now(timezone.utc)

# Get all IAM users in this AWS account
for user in iam.list_users()['Users']:
    arn = user['Arn']
    username = user['UserName']
    # Get all access keys for this IAM user
    keys = iam.list_access_keys(UserName=username)
    # Test each key's age and print those that are too old
    for key in keys['AccessKeyMetadata']:
        akid = key['AccessKeyId']
        created = key['CreateDate']
        created_delta = today - created
        # If this access key is older than DAYS
        if created + timedelta(days=DAYS) < today:
            count += 1
            response = iam.get_access_key_last_used(AccessKeyId=akid)
            akid_last_used = response['AccessKeyLastUsed']
            if not header_printed:
                header_printed = True
                print('Account, Username, Access Key, Age, Last Used')
            print(f'{account}, {username}, {akid}, {created_delta.days}, ', end='')
            # Only keys that have actually been used will have a last-used date
            if 'LastUsedDate' in akid_last_used:
                last_used = akid_last_used['LastUsedDate']
                last_used_delta = today - last_used
                print(last_used_delta.days)
            else:
                print('none')

sys.exit(count)
This will print out a list of access keys over 90 days old, in CSV format. For example:
Account, Username, Access Key, Age, Last Used
123456784321, james, AKIAJ7PL4POLWNEXAMPLE, 91, 1
123456784321, frank, AKIAL2CV9LKWEXAMPLE, 200, 100
123456784321, mary, AKIAYTWHD3BNMLEXAMPLE, 97, none
The Age is how many days old the access key is. The Last Used is how many days it has been since the credential was last used. Hope this proves to be helpful.
The script's exit code is the count of keys older than 90 days, so you can use this exit code in a shell script to decide what to do next. For example:
#!/bin/bash
python3 scripts_older_than_90days.py > oldkeys.csv
count=$?
if [ $count -eq 0 ]
then
    echo "All access keys good"
else
    echo "Count of old keys" $count
fi
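If you want to keep the whole check in Python rather than Bash, note that os.system returns only the child's exit status, never its output, which is why the attempts in the question always took the same branch. Here is a minimal sketch using subprocess (assuming the accountNumber and reportName variables from the question):
import subprocess

# Capture the child script's stdout; os.system cannot do this.
# capture_output requires Python 3.7+.
result = subprocess.run(
    ["python3", "ListUsersWithAccessKeysOlderThan90Days.py", accountNumber],
    capture_output=True, text=True,
)

if result.stdout.strip():
    print("Content", file=reportName)
else:
    print("No Content", file=reportName)
An empty stdout then maps to "No Content", mirroring the -z test in the Bash version.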

Related

pandas aggregation based on timestamp threshold

I hope somebody can help me to solve this issue.
I have a CSV file structured as in the sample shown further down.
I am trying to group events by message, name and userID when they occur within a 10-minute window starting from the first matched event.
The output I expect from the CSV is only 3 rows, because the second and third events (as they fall within the 10-minute window and the message, name and ID are the same) should be grouped, with an extra column named event_count that reports how many times that event occurred.
I started working on this and my script looks like this:
import csv
import pandas as pd
# 0. sort data by timestamp if not already sorted
file_csv = 'test.csv'
f = pd.read_csv(file_csv)
f['#timestamp'] = pd.to_datetime(f['#timestamp'])
f = f.sort_values('#timestamp')
# lazy groupby
groups = f.groupby(['message','name','userID'])
# 1. compute the time differences `timediff` and compare to threshold
f['timediff'] = groups['#timestamp'].diff() < pd.Timedelta(minutes=10)
# 2. find the blocks with cumsum
f['event_count'] = groups['timediff'].cumsum()
# 3. groupby the blocks
out = (f.groupby(['message','name', 'userID'])
        .agg({'#timestamp': 'first', 'timediff': 'count'})
      )
keep_col = ['#timestamp', 'message', 'name', 'userID', 'event_count']
new_f = f[keep_col]
new_f.to_csv("aggregationtest.csv", index=False)
But the aggregation is totally wrong, because it groups all the events together even if they don't fall within the 10-minute window.
I am really struggling to understand what I am doing wrong; I would be grateful if somebody could help me understand the issue.
UPDATE:
After some testing I managed to get output closer to what I am expecting, but it is still wrong.
I made some updates to the out variable, as follows:
out = (f.groupby(['message','name', 'userID', 'timediff'])
        .agg({'#timestamp': 'first', 'message': 'unique', 'name': 'unique',
              'userID': 'unique', 'timediff': 'count'}))
This bit of code now groups the events, but the count is wrong. Given this CSV file:
#timestamp,message,name,userID
2021-07-13 21:36:18,Failed to download file,Failed to download file,admin
2021-07-14 03:46:16,Successful Logon for user "user1",Logon Attempt,1
2021-07-14 03:51:16,Successful Logon for user "user1",Logon Attempt,1
2021-07-14 03:54:16,Successful Logon for user "user1",Logon Attempt,1
2021-07-14 04:55:16,Successful Logon for user "user1",Logon Attempt,1
I am expecting the following event_count values:
1
3
1
But I am getting a different outcome.
You'll have to somehow identify the different periods within the groups. The solution below gives each period within the group a name, which can then be included in the groupby that generates the count:
import pandas as pd

file_csv = 'test.csv'
f = pd.read_csv(file_csv)
f['#timestamp'] = pd.to_datetime(f['#timestamp'])
f = f.sort_values('#timestamp')

def check(item):  # taken from https://stackoverflow.com/a/53189777/11380795
    diffs = item - item.shift()
    laps = diffs > pd.Timedelta('10 min')
    periods = laps.cumsum().apply(lambda x: 'period_{}'.format(x+1))
    return periods

# create period names
f['period'] = f.groupby(['message','name','userID'])['#timestamp'].transform(check)

# group by and count
(f.groupby(['message','name', 'userID', 'period'])
   .agg({'#timestamp': 'first', 'period': 'count'})
   .rename(columns={"period": "timediff"})
   .reset_index())
Output:
   message                            name                     userID  period    #timestamp           timediff
0  Failed to download file            Failed to download file  admin   period_1  2021-07-13 21:36:18  1
1  Successful Logon for user "user1"  Logon Attempt            1       period_1  2021-07-14 03:46:16  3
2  Successful Logon for user "user1"  Logon Attempt            1       period_2  2021-07-14 04:55:16  1
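For intuition, here is a minimal sketch (with a hypothetical toy series, not part of the original answer) of what check does: gaps larger than the threshold flip the flag to True, and the running cumsum gives each block its period number:
import pandas as pd

# Toy timestamps: three events a few minutes apart, then a 61-minute gap.
ts = pd.Series(pd.to_datetime([
    '2021-07-14 03:46:16', '2021-07-14 03:51:16',
    '2021-07-14 03:54:16', '2021-07-14 04:55:16',
]))

diffs = ts - ts.shift()                 # gap to the previous event (NaT for the first)
laps = diffs > pd.Timedelta('10 min')   # True where a new period starts
print(laps.cumsum().tolist())           # [0, 0, 0, 1] -> period_1 three times, then period_2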

How to get the list of followers from an Instagram account without getting banned?

I am trying to scrape all the followers of some particular Instagram accounts. I am using Python 3.8.3 and the latest version of Instaloader library. The code I have written is given below:
# Import the required libraries:
import instaloader
import time
from random import randint
# Start time:
start = time.time()
# Create an instance of instaloader:
loader = instaloader.Instaloader()
# Credentials & target account:
user_id = USERID
password = PASSWORD
target = TARGET # Account whose list of followers needs to be scraped
# Login or load the session:
loader.login(user_id, password)
# Obtain the profile metadata of the target:
profile = instaloader.Profile.from_username(loader.context, target)
# Print the list of followers and save it in a text file:
try:
    # The list to store the collected user handles of the followers:
    followers_list = []
    # Variables used to apply pauses to slow down scraping:
    count = 0
    short_counter = 1
    short_pauser = randint(19, 24)
    long_counter = 1
    long_pauser = randint(4900, 5000)
    # Fetch the followers one by one:
    for follower in profile.get_followers():
        sleeper = randint(840, 1020)
        # Short pause for the process:
        if (short_counter % short_pauser == 0):
            short_counter = 0
            short_pauser = randint(19, 24)
            print('\nShort Pause.\n')
            time.sleep(1)
        # Long pause for the process:
        if (long_counter % long_pauser == 0):
            long_counter = 0
            long_pauser = randint(4900, 5000)
            print('\nLong pause.\n')
            time.sleep(sleeper)
        # Append the list and print the follower's user handle:
        followers_list.append(follower.username)
        print(count, '', followers_list[count])
        # Increment the counters accordingly:
        count = count + 1
        short_counter = short_counter + 1
        long_counter = long_counter + 1
    # Store the followers list in a txt file:
    txt_file = target + '.txt'
    with open(txt_file, 'a+') as f:
        for the_follower in followers_list:
            f.write(the_follower)
            f.write('\n')
except Exception as e:
    print(e)

# End time:
end = time.time()
total_time = end - start
# Print the time taken for execution:
print('Time taken for complete execution:', total_time, 's.')
I am getting the following error after scraping some data:
HTTP Error 400 (Bad Request) on GraphQL Query. Retrying with shorter page length.
HTTP Error 400 (Bad Request) on GraphQL Query. Retrying with shorter page length.
400 Bad Request
In fact, the error occurs when Instagram detects unusual activity and disables the account for a while and prompts the user to change the password.
I have tried:
(1) Slowing down the scraping process.
(2) Adding pauses in between to make the program more human-like.
Still, no progress.
How can I bypass such restrictions and get the complete list of all the followers?
If getting the entire list is not possible, what is the best way to get a list of at least 20,000 followers (from multiple accounts) without getting banned, having my account disabled, or facing such inconveniences?

Getting access key age AWS Boto3

I am trying to figure out a way to get a user's access key age through an AWS Lambda function using Python 3.6 and Boto3. My issue is that I can't seem to find the right API call, if any exists, for this purpose. The two closest that I can find are list_access_keys, which I can use to find the creation date of a key, and get_access_key_last_used, which can give me the day the key was last used. However, neither of these (nor any other I can find) simply gives the access key age as shown in the AWS IAM console users view. Does a way exist to simply get the access key age?
This simple code does the same thing without a lot of time conversions:
import boto3
from datetime import date
client = boto3.client('iam')
username = "<YOUR-USERNAME>"
res = client.list_access_keys(UserName=username)
accesskeydate = res['AccessKeyMetadata'][0]['CreateDate'].date()
currentdate = date.today()
active_days = currentdate - accesskeydate
print(active_days.days)
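If a user has more than one access key, the same idea extends to a loop over all of them (a small sketch along the same lines, not part of the original answer):
import boto3
from datetime import date

client = boto3.client('iam')
username = "<YOUR-USERNAME>"

# Print the age in days of every access key this user has.
for key in client.list_access_keys(UserName=username)['AccessKeyMetadata']:
    age = date.today() - key['CreateDate'].date()
    print(key['AccessKeyId'], age.days)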
There is no direct way. You can use the following code snippet to achieve what you are trying:
import boto3, json, time, datetime, sys
client = boto3.client('iam')
username = "<YOUR-USERNAME>"
res = client.list_access_keys(UserName=username)
### Use a for loop if you are going to run this in production; this just grabs the first key
accesskeydate = res['AccessKeyMetadata'][0]['CreateDate']
accesskeydate = accesskeydate.strftime("%Y-%m-%d %H:%M:%S")
currentdate = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
accesskeyd = time.mktime(datetime.datetime.strptime(accesskeydate, "%Y-%m-%d %H:%M:%S").timetuple())
currentd = time.mktime(datetime.datetime.strptime(currentdate, "%Y-%m-%d %H:%M:%S").timetuple())
active_days = (currentd - accesskeyd)/60/60/24  ### We get the data in seconds; convert it to days
print(int(round(active_days)))
Let me know if this works as expected.
Upon further testing, I've come up with the following, which runs in Lambda. This Python 3.6 function will email users if their IAM keys are 90 days old or older.
Pre-requisites:
- All IAM users have an email tag with a proper email address as the value. For example:
  IAM user tag key: email
  IAM user tag value: someone@gmail.com
- Every email used needs to be verified in SES.
import boto3, os, time, datetime, sys, json
from datetime import date
from botocore.exceptions import ClientError

iam = boto3.client('iam')
email_list = []

def lambda_handler(event, context):
    print("All IAM user emails that have AccessKeys 90 days or older")
    for userlist in iam.list_users()['Users']:
        userKeys = iam.list_access_keys(UserName=userlist['UserName'])
        for keyValue in userKeys['AccessKeyMetadata']:
            if keyValue['Status'] == 'Active':
                currentdate = date.today()
                active_days = currentdate - keyValue['CreateDate'].date()
                if active_days >= datetime.timedelta(days=90):
                    userTags = iam.list_user_tags(UserName=keyValue['UserName'])
                    email_tag = list(filter(lambda tag: tag['Key'] == 'email', userTags['Tags']))
                    if len(email_tag) == 1:
                        email = email_tag[0]['Value']
                        email_list.append(email)
                        print(email)
    email_unique = list(set(email_list))
    print(email_unique)
    RECIPIENTS = email_unique
    SENDER = "AWS SECURITY <security@example.com>"  # placeholder address; the sender must be verified in SES
    AWS_REGION = os.environ['region']
    SUBJECT = "IAM Access Key Rotation"
    BODY_TEXT = ("Your IAM Access Key needs to be rotated in AWS Account: 123456789 as it is 3 months or older.\r\n"
                 "Log into AWS and go to your IAM user to fix: https://console.aws.amazon.com/iam/home?#security_credential"
                 )
    BODY_HTML = """
    AWS Security: IAM Access Key Rotation: Your IAM Access Key needs to be rotated in AWS Account: 123456789 as it is 3 months or older. Log into AWS and go to your https://console.aws.amazon.com/iam/home?#security_credential to create a new set of keys. Ensure to disable / remove your previous key pair.
    """
    CHARSET = "UTF-8"
    client = boto3.client('ses', region_name=AWS_REGION)
    try:
        response = client.send_email(
            Destination={
                'ToAddresses': RECIPIENTS,
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': CHARSET,
                        'Data': BODY_HTML,
                    },
                    'Text': {
                        'Charset': CHARSET,
                        'Data': BODY_TEXT,
                    },
                },
                'Subject': {
                    'Charset': CHARSET,
                    'Data': SUBJECT,
                },
            },
            Source=SENDER,
        )
    except ClientError as e:
        print(e.response['Error']['Message'])
    else:
        print("Email sent! Message ID:")
        print(response['MessageId'])
Using the above methods you will only get the age of the access keys. As a best practice, from a security standpoint you should also check the rotation period, that is, when the keys were last rotated. If a key's rotation age is more than 90 days, you could alert your team.
The only way to get the rotation age of the access keys is by using the credentials report from IAM: download it, parse it, and calculate the age.
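That parsing can be automated with boto3 as well. A minimal sketch (the generate/get calls and the access_key_1_last_rotated column are part of the documented IAM credential report; error handling omitted):
import csv
import io
import time
import boto3

iam = boto3.client('iam')

# Ask IAM to (re)generate the report and poll until it is ready.
while iam.generate_credential_report()['State'] != 'COMPLETE':
    time.sleep(2)

# The report content is a CSV document returned as bytes.
report = iam.get_credential_report()['Content'].decode('utf-8')
for row in csv.DictReader(io.StringIO(report)):
    # 'N/A' means the user has no access key 1.
    print(row['user'], row['access_key_1_last_rotated'])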

How to disable Create Project permission for users by default in GitLab?

I am using the Omnibus GitLab CE system with LDAP authentication.
Because of LDAP authentication, anyone in my company can sign in to GitLab and a new GitLab user account associated with this user is created (according to my understanding).
I want to modify it so that by default this new user (who can automatically sign in based on his LDAP credentials) cannot create new projects.
Then I, as the admin, will probably handle most new project creation.
I might give the Create Project permission to a few special users.
In newer versions of GitLab >= v7.8 …
This is not a setting in config/gitlab.yml but rather in the GUI for admins.
Simply navigate to https://___[your GitLab URL]___/admin/application_settings/general#js-account-settings, and set Default projects limit to 0.
You can then adjust individual users' project limits at https://___[your GitLab URL]___/admin/users.
See GitLab's update docs for more settings changed between v7.7 and v7.8.
git diff origin/7-7-stable:config/gitlab.yml.example origin/7-8-stable:config/gitlab.yml.example
For all new users:
Refer to Nick Merrill's answer.
For all existing users:
This is the best and quickest method to change project limits for existing users:
$ gitlab-rails runner "User.where(projects_limit: 10).each { |u| u.projects_limit = 0; u.save }"
(Update: this applies to versions <= 7.7.)
The default permissions are set in gitlab.yml
In omnibus, that is /opt/gitlab/embedded/service/gitlab-rails/config/gitlab.yml
Look for
## User settings
default_projects_limit: 10
# default_can_create_group: false # default: true
Setting default_projects_limit to zero, and default_can_create_group to false may be what you want.
Then an admin can change the limits for individual users.
Update:
This setting was included in the admin GUI in version 7.8 (see the answer by @Nick M). At least with Omnibus on CentOS 7, an upgrade retains the setting.
Note that the setting default_can_create_group is still in gitlab.yml.
Here's my quick-and-dirty Python script which you can use in case you already have some users created and want to change all your existing users to make them unable to create projects on their own:
#!/usr/bin/env python
import requests
import json

gitlab_url = "https://<your_gitlab_host_and_domain>/api/v3"
headers = {'PRIVATE-TOKEN': '<private_token_of_a_user_with_admin_rights>'}

def set_user_projects_limit_to_zero(user):
    user_id = str(user['id'])
    put = requests.put(gitlab_url + "/users/" + user_id + "?projects_limit=0", headers=headers)
    if put.status_code != 200:
        print "!!! change failed with user id=%s, status code=%s" % (user_id, put.status_code)
        exit(1)
    else:
        print "user with id=%s changed!" % user_id

users_processed = 0
page_no = 1
total_pages = 1
print "processing 1st page of users..."
while page_no <= total_pages:
    users = requests.get(gitlab_url + "/users?page=" + str(page_no), headers=headers)
    total_pages = int(users.headers['X-Total-Pages'])
    for user in users.json():
        set_user_projects_limit_to_zero(user)
        users_processed = users_processed + 1
    print "processed page %s/%s..." % (page_no, total_pages)
    page_no = page_no + 1
print "no of processed users=%s" % users_processed
Tested & working with GitLab CE 8.4.1 052b38d, YMMV.
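Note that the script above targets API v3 and Python 2. On current GitLab versions the equivalent against API v4 would look roughly like this (an untested sketch, assuming the v4 users endpoint still accepts projects_limit):
#!/usr/bin/env python3
import requests

gitlab_url = "https://<your_gitlab_host_and_domain>/api/v4"
headers = {'PRIVATE-TOKEN': '<private_token_of_a_user_with_admin_rights>'}

page_no, total_pages = 1, 1
while page_no <= total_pages:
    users = requests.get(gitlab_url + "/users", params={'page': page_no}, headers=headers)
    total_pages = int(users.headers['X-Total-Pages'])
    for user in users.json():
        # Set the per-user project limit to zero.
        put = requests.put(gitlab_url + "/users/%s" % user['id'],
                           data={'projects_limit': 0}, headers=headers)
        put.raise_for_status()
        print("user with id=%s changed!" % user['id'])
    page_no += 1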

ProFTPd MySQL setup

I'm trying to install ProFTPd with MySQL on Ubuntu Server 11.10 64-bit, but I cannot log in; it always shows 'Login Incorrect'.
This is my sql.conf file:
# add the following lines to the file (don't need to remove comments from it)
DefaultRoot ~
# The passwords in MySQL are encrypted using CRYPT
SQLBackend mysql
SQLEngine on
SQLAuthTypes Plaintext Crypt
SQLAuthenticate users* groups*
# used to connect to the database
# databasename@host database_user user_password
SQLConnectInfo ftp@localhost proftpd password
# Here we tell ProFTPd the names of the database columns in the "usertable"
# we want it to interact with. Match the names with those in the db
SQLUserInfo ftpuser userid passwd uid gid homedir shell
# Here we tell ProFTPd the names of the database columns in the "grouptable"
# we want it to interact with. Again the names match with those in the db
SQLGroupInfo ftpgroup groupname gid members
# set min UID and GID - otherwise these are 999 each
SQLMinID 500
# create a user's home directory on demand if it doesn't exist
SQLHomedirOnDemand on
# Update count every time user logs in
SQLLog PASS updatecount
SQLNamedQuery updatecount UPDATE "count=count+1, accessed=now() WHERE userid='%u'" ftpuser
# Update modified everytime user uploads or deletes a file
SQLLog STOR,DELE modified
SQLNamedQuery modified UPDATE "modified=now() WHERE userid='%u'" ftpuser
# User quotas
# ===========
QuotaEngine on
QuotaDirectoryTally on
QuotaDisplayUnits Mb
QuotaShowQuotas on
SQLNamedQuery get-quota-limit SELECT "name, quota_type, per_session, limit_type, bytes_in_avail, bytes_out_avail, bytes_xfer_avail, files_in_avail, files_out_avail, files_xfer_avail FROM ftpquotalimits WHERE name = '%{0}' AND quota_type = '%{1}'"
SQLNamedQuery get-quota-tally SELECT "name, quota_type, bytes_in_used, bytes_out_used, bytes_xfer_used, files_in_used, files_out_used, files_xfer_used FROM ftpquotatallies WHERE name = '%{0}' AND quota_type = '%{1}'"
SQLNamedQuery update-quota-tally UPDATE "bytes_in_used = bytes_in_used + %{0}, bytes_out_used = bytes_out_used + %{1}, bytes_xfer_used = bytes_xfer_used + %{2}, files_in_used = files_in_used + %{3}, files_out_used = files_out_used + %{4}, files_xfer_used = files_xfer_used + %{5} WHERE name = '%{6}' AND quota_type = '%{7}'" ftpquotatallies
SQLNamedQuery insert-quota-tally INSERT "%{0}, %{1}, %{2}, %{3}, %{4}, %{5}, %{6}, %{7}" ftpquotatallies
QuotaLimitTable sql:/get-quota-limit
QuotaTallyTable sql:/get-quota-tally/update-quota-tally/insert-quota-tally
RootLogin off
RequireValidShell off
SQLNamedQuery userquota SELECT "IF ((SELECT (@availmbytes:=ROUND((`bytes_in_avail`/1048576),2)) FROM `ftpquotalimits` WHERE `name`='%u') = 0, \"No user quota applies.\", CONCAT(\"User quota: Used \", (SELECT (@usedmbytes:=ROUND((`bytes_in_used`/1048576),2)) FROM `ftpquotatallies` WHERE `name`='%u'), \"MB from \", @availmbytes, \"MB. You have \", ROUND(@availmbytes-@usedmbytes,2), \"MB available space.\"))"
SQLShowInfo LIST "226" "%{userquota}"
PassivePorts 60000 65000
Is there anything wrong with this configuration?
Do the SQLConnectInfo username and password need quotes?
Maybe try following one of these manuals:
Ubuntu 12:
https://www.digitalocean.com/community/tutorials/how-to-set-up-proftpd-with-a-mysql-backend-on-ubuntu-12-10
Ubuntu 14:
https://www.howtoforge.com/virtual-hosting-with-proftpd-and-mysql-incl-quota-on-ubuntu-14.04-lts-p2
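One debugging suggestion (not from the original answer): mod_sql can log the queries it runs, which usually shows why authentication fails. Assuming a 1.3.x mod_sql that still supports the SQLLogFile directive, and an example log path:
SQLLogFile /var/log/proftpd/sql.log
Then test the configuration with proftpd -t and watch that log during a login attempt.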
