Lines of code you have written [closed] - perforce

Out of curiosity, is there any way to get the number of lines of code you have written (in a specific project)?
I tried Perforce with p4 describe #CLN | wc -l, but apart from the many edge cases (comments being included, blank lines being added, etc.), it also skips newly added files. The edge cases can be ignored if we only want physical lines of code, but the newly added files still cause a problem.

I went ahead and wrote a Python script that prints out the number of lines of code added/changed by a user and the average number of lines per change.
Tested on Windows with Python 2.7.2. You can run it from the command line - it assumes p4 is on your path.
Usage: codestats.py -u [username]
It works with git too: codestats.py -u [authorname] -g.
It does some blacklisting to prune out bulk adds (e.g. when you just added a library) and also blacklists certain file types (e.g. .html files). Otherwise, it works pretty well.
Hope this helps!
########################################################################
# Script that computes the lines of code stats for a perforce/git user.
########################################################################
import argparse
import logging
import subprocess
import sys
import re

VALID_ARGUMENTS = [
    ("user", "-u", "--user", "Run lines of code computation for the specified user.", 1),
    ("change", "-c", "--change", "Just display lines of code in the passed in change (useful for debugging).", 1),
    ("git", "-g", "--git", "Use git rather than perforce (which is the default versioning system queried).", 0)
]

class PrintHelpOnErrorArgumentParser(argparse.ArgumentParser):
    def error(self, message):
        logging.error("error: {0}\n\n".format(message))
        self.print_help()
        sys.exit(2)

def is_code_file(depot_path):
    fstat_output = subprocess.Popen(['p4', 'fstat', depot_path], stdout=subprocess.PIPE).communicate()[0].split('\n')
    text_file = False
    head_type_regex = re.compile('^... headType (\S+)\s*$')
    for line in fstat_output:
        head_type_line = head_type_regex.match(line)
        if head_type_line:
            head_type = head_type_line.group(1)
            text_file = (head_type.find('text') != -1)
    if text_file:
        blacklisted_file_types = ['html', 'css', 'twb', 'twbx', 'tbm', 'xml']
        for file_type in blacklisted_file_types:
            if re.match('^\/\/depot.*\.{}#\d+$'.format(file_type), depot_path):
                text_file = False
                break
    return text_file

def parse_args():
    parser = PrintHelpOnErrorArgumentParser()
    for arg_name, short_switch, long_switch, help, num_args in VALID_ARGUMENTS:
        if num_args != 0:
            parser.add_argument(
                short_switch,
                nargs=num_args,
                type=str,
                dest=arg_name)
        else:
            parser.add_argument(
                long_switch,
                short_switch,
                action="store_true",
                help=help,
                dest=arg_name)
    return parser.parse_args()

file_edited_regex = re.compile('^... .*?#\d+ edit\s*$')
file_deleted_regex = re.compile('^... .*?#\d+ delete\s*$')
file_integrated_regex = re.compile('^... .*?#\d+ integrate\s*$')
file_added_regex = re.compile('^... (.*?#\d+) add\s*$')
affected_files_regex = re.compile('^Affected files ...')

outliers = []  # Changes that seem as if they weren't hand coded and merit inspection

def num_lines_in_file(depot_path):
    lines = len(subprocess.Popen(['p4', 'print', depot_path], stdout=subprocess.PIPE).communicate()[0].split('\n'))
    return lines

def parse_change(changelist):
    change_description = subprocess.Popen(['p4', 'describe', '-ds', changelist], stdout=subprocess.PIPE).communicate()[0].split('\n')
    parsing_differences = False
    parsing_affected_files = False
    differences_regex = re.compile('^Differences \.\.\..*$')
    line_added_regex = re.compile('^add \d+ chunks (\d+) lines.*$')
    line_removed_regex = re.compile('^deleted \d+ chunks (\d+) lines.*$')
    line_changed_regex = re.compile('^changed \d+ chunks (\d+) / (\d+) lines.*$')
    file_diff_regex = re.compile('^==== (\/\/depot.*#\d+)\s*\S+$')
    skip_file = False
    num_lines_added = 0
    num_lines_deleted = 0
    num_lines_changed_added = 0
    num_lines_changed_deleted = 0
    num_files_added = 0
    num_files_edited = 0
    for line in change_description:
        if differences_regex.match(line):
            parsing_differences = True
        elif affected_files_regex.match(line):
            parsing_affected_files = True
        elif parsing_differences:
            if file_diff_regex.match(line):
                regex_match = file_diff_regex.match(line)
                skip_file = not is_code_file(regex_match.group(1))
            elif not skip_file:
                regex_match = line_added_regex.match(line)
                if regex_match:
                    num_lines_added += int(regex_match.group(1))
                else:
                    regex_match = line_removed_regex.match(line)
                    if regex_match:
                        num_lines_deleted += int(regex_match.group(1))
                    else:
                        regex_match = line_changed_regex.match(line)
                        if regex_match:
                            num_lines_changed_added += int(regex_match.group(2))
                            num_lines_changed_deleted += int(regex_match.group(1))
        elif parsing_affected_files:
            if file_added_regex.match(line):
                file_added_match = file_added_regex.match(line)
                depot_path = file_added_match.group(1)
                if is_code_file(depot_path):
                    lines_in_file = num_lines_in_file(depot_path)
                    if lines_in_file > 3000:
                        # Anomaly - probably a copy of existing code - discard this
                        lines_in_file = 0
                    num_lines_added += lines_in_file
                    num_files_added += 1
            elif file_edited_regex.match(line):
                num_files_edited += 1
    return [num_files_added, num_files_edited, num_lines_added, num_lines_deleted, num_lines_changed_added, num_lines_changed_deleted]

def contains_integrates(changelist):
    change_description = subprocess.Popen(['p4', 'describe', '-s', changelist], stdout=subprocess.PIPE).communicate()[0].split('\n')
    contains_integrates = False
    parsing_affected_files = False
    for line in change_description:
        if affected_files_regex.match(line):
            parsing_affected_files = True
        elif parsing_affected_files:
            if file_integrated_regex.match(line):
                contains_integrates = True
                break
    return contains_integrates

#################################################
# Note: Keep this function in sync with
# generate_line.
#################################################
def generate_output_specifier(output_headers):
    output_specifier = ''
    for output_header in output_headers:
        output_specifier += '| {:'
        output_specifier += '{}'.format(len(output_header))
        output_specifier += '}'
    if output_specifier != '':
        output_specifier += ' |'
    return output_specifier

#################################################
# Note: Keep this function in sync with
# generate_output_specifier.
#################################################
def generate_line(output_headers):
    line = ''
    for output_header in output_headers:
        line += '--'  # for the '| '
        header_padding_specifier = '{:-<'
        header_padding_specifier += '{}'.format(len(output_header))
        header_padding_specifier += '}'
        line += header_padding_specifier.format('')
    if line != '':
        line += '--'  # for the last ' |'
    return line

# Returns true if a change is a bulk addition or a private change
def is_black_listed_change(user, changelist):
    large_add_change = False
    all_adds = True
    num_adds = 0
    is_private_change = False
    is_third_party_change = False
    change_description = subprocess.Popen(['p4', 'describe', '-s', changelist], stdout=subprocess.PIPE).communicate()[0].split('\n')
    for line in change_description:
        if file_edited_regex.match(line) or file_deleted_regex.match(line):
            all_adds = False
        elif file_added_regex.match(line):
            num_adds += 1
        if line.find('... //depot/private') != -1:
            is_private_change = True
            break
        if line.find('... //depot/third-party') != -1:
            is_third_party_change = True
            break
    large_add_change = all_adds and num_adds > 70
    #print "{}: {}".format(changelist, large_add_change or is_private_change)
    return large_add_change or is_third_party_change

change_header_regex = re.compile('^Change (\d+)\s*.*?\s*(\S+)#.*$')

def get_user_and_change_header_for_change(changelist):
    change_description = subprocess.Popen(['p4', 'describe', '-s', changelist], stdout=subprocess.PIPE).communicate()[0].split('\n')
    user = None
    change_header = None
    for line in change_description:
        change_header_match = change_header_regex.match(line)
        if change_header_match:
            user = change_header_match.group(2)
            change_header = line
            break
    return [user, change_header]

if __name__ == "__main__":
    log = logging.getLogger()
    log.setLevel(logging.DEBUG)
    args = parse_args()

    user_stats = {}
    user_stats['num_changes'] = 0
    user_stats['lines_added'] = 0
    user_stats['lines_deleted'] = 0
    user_stats['lines_changed_added'] = 0
    user_stats['lines_changed_removed'] = 0
    user_stats['total_lines'] = 0
    user_stats['files_edited'] = 0
    user_stats['files_added'] = 0

    change_log = []
    if args.git:
        git_log_command = ['git', 'log', '--author={}'.format(args.user[0]), '--pretty=tformat:', '--numstat']
        git_log_output = subprocess.Popen(git_log_command, stdout=subprocess.PIPE).communicate()[0].split('\n')
        git_log_line_regex = re.compile('^(\d+)\s*(\d+)\s*\S+$')
        total = 0
        adds = 0
        subs = 0
        for git_log_line in git_log_output:
            line_match = git_log_line_regex.match(git_log_line)
            if line_match:
                adds += int(line_match.group(1))
                subs += int(line_match.group(2))
        total = adds - subs

        num_commits = 0
        git_shortlog_command = ['git', 'shortlog', '--author={}'.format(args.user[0]), '-s']
        git_shortlog_output = subprocess.Popen(git_shortlog_command, stdout=subprocess.PIPE).communicate()[0].split('\n')
        git_shortlog_line_regex = re.compile('^\s*(\d+)\s+.*$')
        for git_shortlog_line in git_shortlog_output:
            line_match = git_shortlog_line_regex.match(git_shortlog_line)
            if line_match:
                num_commits += int(line_match.group(1))

        print "Git Stats for {}: Commits: {}. Lines of code: {}. Average Lines Per Change: {}.".format(args.user[0], num_commits, total, total*1.0/num_commits)
        sys.exit(0)
    elif args.change:
        [args.user, change_header] = get_user_and_change_header_for_change(args.change)
        change_log = [change_header]
    else:
        change_log = subprocess.Popen(['p4', 'changes', '-u', args.user, '-s', 'submitted'], stdout=subprocess.PIPE).communicate()[0].split('\n')

    output_headers = ['Current Change', 'Num Changes', 'Files Added', 'Files Edited']
    output_headers.append('Lines Added')
    output_headers.append('Lines Deleted')
    if not args.git:
        output_headers.append('Lines Changed (Added/Removed)')
    avg_change_size = 0.0
    output_headers.append('Total Lines')
    output_headers.append('Avg. Lines/Change')

    line = generate_line(output_headers)
    output_specifier = generate_output_specifier(output_headers)
    print line
    print output_specifier.format(*output_headers)
    print line

    output_specifier_with_carriage_return = output_specifier + '\r'
    for change in change_log:
        change_match = change_header_regex.search(change)
        if change_match:
            user_stats['num_changes'] += 1
            changelist = change_match.group(1)
            if not is_black_listed_change(args.user, changelist) and not contains_integrates(changelist):
                [files_added_in_change, files_edited_in_change, lines_added_in_change, lines_deleted_in_change, lines_changed_added_in_change, lines_changed_removed_in_change] = parse_change(change_match.group(1))
                if lines_added_in_change > 5000 and changelist not in outliers:
                    outliers.append([changelist, lines_added_in_change])
                else:
                    user_stats['lines_added'] += lines_added_in_change
                    user_stats['lines_deleted'] += lines_deleted_in_change
                    user_stats['lines_changed_added'] += lines_changed_added_in_change
                    user_stats['lines_changed_removed'] += lines_changed_removed_in_change
                    user_stats['total_lines'] += lines_changed_added_in_change
                    user_stats['total_lines'] -= lines_changed_removed_in_change
                    user_stats['total_lines'] += lines_added_in_change
                    user_stats['files_edited'] += files_edited_in_change
                    user_stats['files_added'] += files_added_in_change
            current_output = [changelist, user_stats['num_changes'], user_stats['files_added'], user_stats['files_edited']]
            current_output.append(user_stats['lines_added'])
            current_output.append(user_stats['lines_deleted'])
            if not args.git:
                current_output.append('{}/{}'.format(user_stats['lines_changed_added'], user_stats['lines_changed_removed']))
            current_output.append(user_stats['total_lines'])
            current_output.append(user_stats['total_lines']*1.0/user_stats['num_changes'])
            print output_specifier_with_carriage_return.format(*current_output),

    print
    print line

    if len(outliers) > 0:
        print "Outliers (changes that merit inspection - and have not been included in the stats):"
        outlier_headers = ['Changelist', 'Lines of Code']
        outlier_specifier = generate_output_specifier(outlier_headers)
        outlier_line = generate_line(outlier_headers)
        print outlier_line
        print outlier_specifier.format(*outlier_headers)
        print outlier_line
        for change in outliers:
            print outlier_specifier.format(*change)
        print outlier_line

The other answers seem to have missed the source-control history side of things.
From http://forums.perforce.com/index.php?/topic/359-how-many-lines-of-code-have-i-written/
Calculate the answer in multiple steps:
1) Added files:
p4 filelog ... | grep ' add on .* by <username>'
p4 print -q foo#1 | wc -l
2) Changed files:
p4 describe <changelist> | grep "^>" | wc -l
Combine all the counts together (scripting...), and you'll have a total.
You might also want to get rid of whitespace-only lines, or lines without any alphanumeric characters, with a grep.
Also, if you are doing this regularly, it would be more efficient to code the whole thing in P4Python and run it incrementally - keeping history and only looking at new commits.
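To make the "combine all the counts together" step concrete, here is a rough sketch in Python (untested; DEPOT and USER are placeholders, and it assumes a logged-in p4 client with the command-line tools on the path):
#!/usr/bin/env python
# Rough sketch of the combination step described above: sum the lines of
# revision #1 for files the user added, plus the "> " diff lines from each
# of the user's submitted changes. DEPOT and USER are placeholders.
import subprocess

DEPOT = '//depot/project/...'
USER = 'jsmith'

def p4(*args):
    return subprocess.check_output(['p4'] + list(args)).decode().splitlines()

# 1) Added files: count the lines of revision #1 of each file the user added.
added = 0
for line in p4('files', DEPOT):
    depot_file = line.split('#')[0]
    filelog = '\n'.join(p4('filelog', depot_file))
    if ' add on ' in filelog and ' by {}@'.format(USER) in filelog:
        added += len(p4('print', '-q', depot_file + '#1'))

# 2) Changed files: count the "> " lines in each submitted change's diff.
changed = 0
for line in p4('changes', '-u', USER, '-s', 'submitted'):
    changelist = line.split()[1]
    changed += sum(1 for l in p4('describe', changelist) if l.startswith('>'))

print('added: {}  changed: {}  total: {}'.format(added, changed, added + changed))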

Yes, there are many ways to count lines of code.
tl;dr Install the Eclipse Metrics Plugin; instructions for setting it up are easy to find. Below is a short script if you want to do it without Eclipse.
Shell script
I will present a fairly general approach. It works on Linux, but it is portable to other systems. Save these two lines to a file called lines.sh:
#!/bin/sh
find . -name "*.java" | awk '{ system("wc "$0) }' | awk '{ print $1 "\t" $4; lines += $1; files++ } END { print "Total: " lines " lines in " files " files."}'
It's a shell script which uses find, wc and the great awk. Make it executable:
chmod +x lines.sh
Now we can execute our shell script.
Let's say you saved lines.sh in /home/you/workspace/projectX.
Script counts lines in .java files, which are located in subdirectories of /home/you/workspace/projectX.
So let's run it with ./lines.sh. You can change *.java to any other file type. (A Python equivalent is sketched after the sample output below.)
Sample output:
adam#adam ~/workspace/Checkers $ ./lines.sh
23 ./src/Checkers.java
14 ./src/event/StartGameEvent.java
38 ./src/event/YourColorEvent.java
52 ./src/event/BoardClickEvent.java
61 ./src/event/GameQueue.java
14 ./src/event/PlayerEscapeEvent.java
14 ./src/event/WaitEvent.java
16 ./src/event/GameEvent.java
38 ./src/event/EndGameEvent.java
38 ./src/event/FakeBoardEvent.java
127 ./src/controller/ServerThread.java
14 ./src/controller/ServerConfig.java
46 ./src/controller/Server.java
170 ./src/controller/Controller.java
141 ./src/controller/ServerNetwork.java
246 ./src/view/ClientNetwork.java
36 ./src/view/Messages.java
53 ./src/view/ButtonField.java
47 ./src/view/ViewConfig.java
32 ./src/view/MainWindow.java
455 ./src/view/View.java
36 ./src/view/ImageLoader.java
88 ./src/model/KingJump.java
130 ./src/model/Cords.java
70 ./src/model/King.java
77 ./src/model/FakeBoard.java
90 ./src/model/CheckerMove.java
53 ./src/model/PlayerColor.java
73 ./src/model/Checker.java
201 ./src/model/AbstractPiece.java
75 ./src/model/CheckerJump.java
154 ./src/model/Model.java
105 ./src/model/KingMove.java
99 ./src/model/FieldType.java
269 ./src/model/Board.java
56 ./src/model/AbstractJump.java
80 ./src/model/AbstractMove.java
82 ./src/model/BoardState.java
Total: 3413 lines in 38 files.
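If you prefer to stay in Python (or are on a system without find and awk), a minimal equivalent of lines.sh might look like this - a sketch, not part of the original answer:
# count_lines.py - minimal Python equivalent of lines.sh (a sketch):
# walk the current directory and sum line counts for *.java files.
import os

total_lines = 0
total_files = 0
for root, _, files in os.walk('.'):
    for name in files:
        if name.endswith('.java'):
            path = os.path.join(root, name)
            with open(path) as f:
                count = sum(1 for _ in f)
            print('{}\t{}'.format(count, path))
            total_lines += count
            total_files += 1
print('Total: {} lines in {} files.'.format(total_lines, total_files))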

Find an app to calculate the lines; there are many subtleties to counting lines - comments, blank lines, multiple operators per line, etc.
Visual Studio has "Calculate Code Metrics" functionality. Since you don't mention a single language, I can't be more specific about which tool to use, but just using "find" and "grep" may not be the way to go.
Also consider that lines of code don't measure actual progress. Completed features on your roadmap measure progress, and the fewer the lines of code, the better. It wouldn't be the first time a proud developer claims his 60,000 lines of code are marvelous, only to find out there's a way to do the same thing in 1,000 lines.

Have a look at SLOCCount. It only counts actual lines of code and performs some additional computations as well.
On OSX, you can easily install it via Homebrew with brew install sloccount.
Sample output for a project of mine:
$ sloccount .
Have a non-directory at the top, so creating directory top_dir
Adding /Users/padde/Desktop/project/./Gemfile to top_dir
Adding /Users/padde/Desktop/project/./Gemfile.lock to top_dir
Adding /Users/padde/Desktop/project/./Procfile to top_dir
Adding /Users/padde/Desktop/project/./README to top_dir
Adding /Users/padde/Desktop/project/./application.rb to top_dir
Creating filelist for config
Adding /Users/padde/Desktop/project/./config.ru to top_dir
Creating filelist for controllers
Creating filelist for db
Creating filelist for helpers
Creating filelist for models
Creating filelist for public
Creating filelist for tmp
Creating filelist for views
Categorizing files.
Finding a working MD5 command....
Found a working MD5 command.
Computing results.
SLOC Directory SLOC-by-Language (Sorted)
256 controllers ruby=256
66 models ruby=66
10 config ruby=10
9 top_dir ruby=9
5 helpers ruby=5
0 db (none)
0 public (none)
0 tmp (none)
0 views (none)
Totals grouped by language (dominant language first):
ruby: 346 (100.00%)
Total Physical Source Lines of Code (SLOC) = 346
Development Effort Estimate, Person-Years (Person-Months) = 0.07 (0.79)
(Basic COCOMO model, Person-Months = 2.4 * (KSLOC**1.05))
Schedule Estimate, Years (Months) = 0.19 (2.28)
(Basic COCOMO model, Months = 2.5 * (person-months**0.38))
Estimated Average Number of Developers (Effort/Schedule) = 0.34
Total Estimated Cost to Develop = $ 8,865
(average salary = $56,286/year, overhead = 2.40).
SLOCCount, Copyright (C) 2001-2004 David A. Wheeler
SLOCCount is Open Source Software/Free Software, licensed under the GNU GPL.
SLOCCount comes with ABSOLUTELY NO WARRANTY, and you are welcome to
redistribute it under certain conditions as specified by the GNU GPL license;
see the documentation for details.
Please credit this data as "generated using David A. Wheeler's 'SLOCCount'."

There is an easier way to do all this, which incidentally is faster than using grep:
First, get all the changelists for a particular user. This is a command-line command, but you can use it in a Python script via os.system():
p4 changes -u <username> > 'some_text_file.txt'
Now you need to extract all the changelist numbers, so we will use a regex for that; here it is done in Python:
import os
import re

f = open('some_text_file.txt', 'r')
lists = f.readlines()
pattern = re.compile(r'\b[0-9][0-9][0-9][0-9][0-9][0-9][0-9]\b')
labels = []
for i in lists:
    labels.append(pattern.findall(i))

changelists = []
for h in labels:
    if type(h) is list:
        changelists.append(str(h[0]))
    else:
        changelists.append(str(h))
Now you have all the changelist numbers in changelists.
We will iterate through the list and, for every changelist, find the number of lines added and the number of lines deleted; the difference gives us the total number of lines added. The following lines of code do exactly that:
for i in changelists:
    os.system('p4 describe -ds ' + i + ' | findstr "^add" >> added.txt')
    os.system('p4 describe -ds ' + i + ' | findstr "^del" >> deleted.txt')

added = []
deleted = []
count_add = 0
count_del = 0

file = open('added.txt')
for i in file:
    added.append(i)
for j in added:
    count = [int(s) for s in j.split() if s.isdigit()]
    count_add += count[1]

file = open('deleted.txt')
for i in file:
    deleted.append(i)
for j in deleted:
    count = [int(s) for s in j.split() if s.isdigit()]
    count_del += count[1]

count_added = count_add - count_del
print count_added
count_added will hold the net number of lines added by the user.
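The same procedure can also be done without the intermediate text files, by reading the describe output directly. A rough, untested sketch (USER is a placeholder; it relies on the same "add/deleted ... lines" summary lines of p4 describe -ds used above):
import re
import subprocess

USER = 'jsmith'  # placeholder username

added = deleted = 0
changes = subprocess.check_output(['p4', 'changes', '-u', USER, '-s', 'submitted']).decode()
for changelist in re.findall(r'^Change (\d+)', changes, re.MULTILINE):
    describe = subprocess.check_output(['p4', 'describe', '-ds', changelist]).decode()
    added += sum(int(n) for n in re.findall(r'^add \d+ chunks (\d+) lines', describe, re.MULTILINE))
    deleted += sum(int(n) for n in re.findall(r'^deleted \d+ chunks (\d+) lines', describe, re.MULTILINE))

print(added - deleted)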

Related

Python skip the lines which do not have any of starting line in the output

I am trying to write some code (after getting help from Google and SO) to parse a command's output, but I'm still having a problem. The output I expect consists of consecutive lines starting with dn, instance and tag, but somehow the very first record only contains dn and tag. I want to skip the records which do not have all three of these starting strings; as I am still learning, I don't have an idea how to do that.
Below is my code:
import subprocess as sp

p = sp.Popen(somecmd, shell=True, stdout=sp.PIPE)
stout = p.stdout.read().decode('utf8')
output = stout.splitlines()
startline = ["instance:", "tag"]
for line in output:
    print(line)
Script output:
dn: ou=People,ou=pti,o=pt
tag: pti00631
dn: cn=pti00857,ou=People,ou=pti,o=pt
instance: Jassu Lal
tag: pti00857
dn: cn=pti00861,ou=People,ou=pti,o=pt
instance: Gatti Lal
tag: pti00861
Desired output:
dn: cn=pti00857,ou=People,ou=pti,o=pt
instance: Jassu Lal
tag: pti00857
dn: cn=pti00861,ou=People,ou=pti,o=pt
instance: Gatti Lal
tag: pti00861
Assuming your output always has the same shape, your loop can look like this:
lines_to_skip = 3
skip_lines = False
skipped_lines = 0
for line in output:
    if "dn: " in line and not "dn: cn" in line:
        skip_lines = True
    if skip_lines:
        if skipped_lines < lines_to_skip:
            skipped_lines += 1
            continue
        if skipped_lines == lines_to_skip:
            skip_lines = False
            skipped_lines = 0
    print(line)
It checks whether there is a dn without a cn, counts to 3 (or rather lines_to_skip) and starts outputting again once it has skipped that many lines.
It's a pretty hacky solution, but the best one I could come up with for the given context.
The code below is more flexible. You only need to add the tags without which you do not want to print to the necessary_tags dictionary; there can be more than 3 of them as well. It also accounts for situations where you receive a particular tag more than once.
import subprocess as sp

p = sp.Popen(somecmd, shell=True, stdout=sp.PIPE)
stout = p.stdout.read().decode('utf8')
output = stout.splitlines()
output.append("")

necessary_tags = {'dn': 0, 'instance': 0, 'tag': 0}
temp_output = []
for line in output:
    tag = line.split(':')[0].strip()
    if necessary_tags.get(tag, -1) != -1:
        necessary_tags[tag] += 1
        temp_output.append(line)
    elif line == "":
        if all(necessary_tags.values()):
            for out in temp_output:
                print(out)
        temp_output = []
        necessary_tags.update({}.fromkeys(necessary_tags, 0))
        print()

list index out of range but it seems impossible since it's only after 3 questions

kanji = ['上','下','大','工','八','入','山','口','九','一','人','力','川','七','十','三','二','女',]
reading = ['じょう','か','たい','こう','はち','にゅう','さん','こう','く','いち','にん','りょく','かわ','しち','じゅう','さん','に','じょ']
definition = ['above','below','big','construction','eight','enter','mountain','mouth','nine','one','person','power','river','seven','ten','three','two','woman']

score = number_of_questions = kanji_item = 0

def question_format(prompt_type,lang,solution_selection):
    global reading,definition,score,num_of_questions,kanji_item
    question_prompt = 'What is the '+str(prompt_type)+' for "'+str(kanji[kanji_item])+'"? (Keyboard:'+str(lang)+')\n'
    solution_selection = [reading,definition]
    usr = input(question_prompt)
    if usr in solution_selection[kanji_item] and kanji[kanji_item]:
        score += 1
        num_of_questions += 1
    else:
        pass
    kanji_item += 1

while number_of_questions != 18:
    question_format('READING','Japanese',[0])
    print('You got ',score,'/',number_of_questions)

while number_of_questions != 36:
    question_format('DEFINITION','English',[1])
    print('You got ',score,'/',number_of_questions)
I can't get past 大, but I can't see where it's messing up. I've tried changing pretty much everything. "kanji_item" is supposed to give a common index number so that the answers can match up. It gets through the first two problems with no hassle, but for some reason refuses to accept my third problem.
Problems:
- wrong name: number_of_questions vs. num_of_questions
- wrong way to check truthiness: in if usr in solution_selection[kanji_item] and kanji[kanji_item]: the last part is always True, as it is a non-empty string
- solution_selection = [reading, definition] has only two elements, but it is indexed with kanji_item, which keeps growing - on the third question kanji_item is 2, which is the "list index out of range" you see
- lots of globals, which is not considered very good style
It would be easier to zip your three lists together so you get tuples of (kanji, reading, description) and feed two of those into your function, depending on what you want to test. You do this twice, once for reading, once for description.
You can even randomize your list of tuples to get different orders in which questions are asked:
kanji = ['上', '下', '大', '工', '八', '入', '山', '口', '九', '一' , '人',
         '力', '川', '七', '十', '三', '二', '女',]
reading = ['じょう', 'か', 'たい', 'こう', 'はち', 'にゅう', 'さん', 'こう', 'く',
           'いち', 'にん', 'りょく', 'かわ', 'しち', 'じゅう', 'さん', 'に', 'じょ']
definition = ['above', 'below', 'big', 'construction', 'eight', 'enter', 'mountain',
              'mouth', 'nine', 'one', 'person', 'power', 'river', 'seven', 'ten', 'three',
              'two', 'woman']

import random

data = list(zip(kanji, reading, definition))
random.shuffle(data)

def question_format(prompt_type, lang, kanji, solution):
    """Creates a question about *kanji* - the correct answer is *solution*.
    Returns 1 if correct else 0."""
    question_prompt = f'What is the {prompt_type} for {kanji}? (Keyboard: {lang})'
    usr = input(question_prompt)
    if usr == solution:
        return 1
    else:
        return 0

questions_asked = 0
correct = 0

for (kanji, reading, _) in data:
    correct += question_format('READING', 'Japanese', kanji, reading)
    questions_asked += 1
    print('You got ', correct, '/', questions_asked)

for (kanji, _, definition) in data:
    correct += question_format('DEFINITION', 'ENGLISH', kanji, definition)
    questions_asked += 1
    print('You got ', correct, '/', questions_asked)
After zipping our lists and shuffling them, data looks like:
[('山', 'さん', 'mountain'), ('女', 'じょ', 'woman'), ('力', 'りょく', 'power'),
('上', 'じょう', 'above'), ('九', 'く', 'nine'), ('川', 'かわ', 'river'),
('入', 'にゅう', 'enter'), ('三', 'さん', 'three'), ('口', 'こう', 'mouth'),
('二', 'に', 'two'), ('人', 'にん', 'person'), ('七', 'しち', 'seven'),
('一', 'いち', 'one'), ('工', 'こう', 'construction'), ('下', 'か', 'below'),
('八', 'はち', 'eight'), ('十', 'じゅう', 'ten'), ('大', 'たい', 'big')]

Running MPI python script in MPI azure ml pipeline

I'm trying to run a distributed Python job through Azure ML pipelines using the MpiStep pipeline class, referring to the example link below - https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/pipeline-style-transfer/pipeline-style-transfer.ipynb
I implemented the same, but even when I change the node count parameter in the MpiStep class, the script always shows the size (i.e. comm.Get_size()) as 1. Can you please help me with what I'm missing here? Is there any specific setup required on the cluster?
Code snippets:
Pipeline code snippet:
model_dir = model_ds.path('./'+saved_model_blob+'/', data_reference_name='saved_model_path').as_mount()
label_dir = model_ds.path('./'+model_label_blob+'/', data_reference_name='model_label_blob').as_mount()
input_images = result_ds.path('./'+score_blob_name+'/', data_reference_name='Input_images').as_mount()

output_container = 'abc'
inti_container = 'xyz'

distributed_batch_score_step = MpiStep(
    name="batch_scoring",
    source_directory=SCRIPT_FOLDER,
    script_name="batch_scoring_script_mpi.py",
    arguments=["--dataset_path", input_images,
               "--model_name", model_dir,
               "--label_dir", label_dir,
               "--intermediate_data_container", inti_container,
               "--output_container", output_container],
    compute_target=gpu_cluster,
    inputs=[input_images, model_dir, label_dir],
    pip_packages=["tensorflow", "tensorflow-gpu==1.13.1", "pillow", "azure-keyvault", "azure-storage-blob"],
    conda_packages=["mesa-libgl-cos6-x86_64", "mpi4py==3.0.2", "opencv=3.4.2", "scikit-learn=0.21.2"],
    use_gpu=True,
    allow_reuse=False,
    node_count=nodecount_param,
    process_count_per_node=1
)
Python Script code snippet:
def run(input_dataset, comm):
    rank = comm.Get_rank()
    size = comm.Get_size()
    print("Rank:", rank)
    print("Size:", size)  # always shows 1, even when the input node count is > 1
    print(MPI.Get_processor_name())

    file_names = get_file_names(args.dataset_path)
    sorted(file_names)
    partition_size = len(file_names) // size
    print("partition_size-->", partition_size)
    partitioned_filenames = file_names[rank * partition_size: (rank + 1) * partition_size]
    print("RANK {} - is processing {} images out of the total {}".format(rank, len(partitioned_filenames),
                                                                         len(file_names)))

    # call to Function 01
    # call to Function 02

    img_names = score_df['image_name'].unique()
    output_batch = pd.DataFrame()
    for i in img_names:
        # call to Function 3
        output_batch = output_batch.append(pp_output, ignore_index=True)

    output_paths_list = comm.gather(output_batch, root=0)
    print("RANK {} - number of pre-aggregated output files {}".format(rank, len(output_batch)))
    print("saved in", currentDT + '\\' + 'data.csv')

    if rank == 0:
        print("RANK {} - number of aggregated output files {}".format(rank, len(output_paths_list)))
        print("RANK {} - end".format(rank))

if __name__ == "__main__":
    with tf.device('/GPU:0'):
        init()
        comm = MPI.COMM_WORLD
        run(args.dataset_path, comm)
It turned out the issue was due to the package installation: mpi4py was previously installed via conda with conda_packages=["mpi4py==3.0.2"], and it worked after changing the install to pip - pip_packages=["mpi4py"].
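For anyone hitting the same problem, a minimal sanity-check script makes it easy to confirm that every process now sees the full communicator (a sketch; the file name check_mpi.py is hypothetical and would be passed as the MpiStep script_name):
# check_mpi.py - minimal mpi4py check (hypothetical script for the MpiStep above)
from mpi4py import MPI

comm = MPI.COMM_WORLD
print("rank {} of {} on {}".format(comm.Get_rank(), comm.Get_size(), MPI.Get_processor_name()))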

How to pick out the second to last line from a telnet command

Like in the many other threads I've opened, I am trying to create a multi-feature instant replay system utilizing the Blackmagic HyperDeck, which operates over Telnet. The current feature I am trying to implement is an in-out replay, which requires storing two timecode variables in the format hh:mm:ss;ff, where h = hours, m = minutes, s = seconds, and f = frames at 30 fps. The telnet command for this is transport info, and the response returns 9 lines, of which I only want the timecode from the 7th. Any idea how to do this, as it is way out of my league?
status: stopped
speed: 0
slot id: 1
clip id: 1
single clip: false
display timecode: 00:00:09;22
timecode: 00:00:09;22
video format: 1080i5994
loop: false
Here's ideally what I would like it to look like
import telnetlib

host = "192.168.1.13"  # changes for each device
port = 9993            # specific for hyperdecks
timeout = 10
session = telnetlib.Telnet(host, port, timeout)

def In():
    session.write(b"transport info \n")
    line = session.read_until(b";00", .5)
    print(line)
    # code to take response and store given line as variable IOin

def out():
    session.write(b"transport info \n")
    line = session.read_until(b";00", .5)
    print(line)
    # code to take response and store given line as variable IOout

def IOplay():
    IOtc = "playrange set: in: " + str(IOin) + " out: " + str(IOout) + " \n"
    session.write(IOtc.encode())
    speed = "play: speed: " + str(Pspeed.get()) + "\n"
    session.write(speed.encode())
For the most part, here's what I got to at least partially work:
TCi = 1
TCo = 1

def In():
    global TCi
    session.write(b"transport info \n")
    by = session.read_until(b";00", .5)
    print(by)
    s = by.find(b"00:")
    TCi = by[s:s+11]

def Out():
    global TCo
    session.write(b"transport info \n")
    by = session.read_until(b";00", .5)
    print(by)
    s = by.find(b"00:")
    TCo = by[s:s+11]

def IOplay():
    IOtc = "playrange set: in: " + str(TCi) + " out: " + str(TCo) + " \n"
    print(IOtc.encode())
    session.write(IOtc.encode())
    speed = "play: speed: 2 \n"
    session.write(speed.encode())
except that it gets encoded as
b"playrange set: in: b'00:00:01;11' out: b'00:00:03;10' \n"
rather than
"playrange set: in: 00:00:01;11 out: 00:00:03;10 \n"
I need to get rid of the apostrophes and the b prefix in front of the variables.
Any ideas?
def get_timecode(text):
    tc = ''
    lines = text.split('\r\n')
    for line in lines:
        if ': ' not in line:
            continue  # skip blank or unexpected lines
        var, val = line.split(': ', maxsplit=1)
        if var == 'timecode':
            tc = val
    return tc
You could choose to go directly to lines[6], without scanning,
but that would be more fragile if the client got out of sync with the server,
or if the server's output formatting changed in a later release.
EDIT:
You wrote:
session.write(b"transport info \n")
#code to take response and store given line as variable IOin
You don't appear to be reading anything from the session.
I don't use telnetlib, but the docs suggest you'll
never obtain those nine lines of text if you don't do something like:
expect = b"foo" # some prompt string returned by server that you never described in your question
session.write(b"transport info\n")
bytes = session.read_until(expect, timeout)
text = bytes.decode()
print(text)
print('Timecode is', get_timecode(text))
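For example, the asker's In()/Out() helpers could be rewritten on top of get_timecode() roughly like this (a sketch, untested against a real HyperDeck; the read marker b"video format" is an assumption about the response, not something from the device docs). Because the response is decoded to a plain string before the timecode is extracted, no b'...' ends up inside the playrange command:
# Sketch: read the transport info response, decode it, and keep plain strings.
def read_timecode():
    session.write(b"transport info\n")
    raw = session.read_until(b"video format", 1)  # assumed marker past the timecode line
    return get_timecode(raw.decode())

def In():
    global TCi
    TCi = read_timecode()

def Out():
    global TCo
    TCo = read_timecode()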

How to convert cmudict-0.7b or cmudict-0.7b.dict in to FST format to use it with phonetisaurus?

I am looking for a simple procedure to generate FST (finite state transducer) from cmudict-0.7b or cmudict-0.7b.dict, which will be used with phonetisaurus.
I tried the following set of commands (phonetisaurus aligner, Google NGramLibrary and phonetisaurus arpa2wfst) and was able to generate an FST, but it didn't work. I am not sure where I made a mistake or missed a step. I guess the very first command, i.e. phonetisaurus-align, is not correct.
phonetisaurus-align --input=cmudict.dict --ofile=cmudict/cmudict.corpus --seq1_del=false
ngramsymbols < cmudict/cmudict.corpus > cmudict/cmudict.syms
/usr/local/bin/farcompilestrings --symbols=cmudict/cmudict.syms --keep_symbols=1 cmudict/cmudict.corpus > cmudict/cmudict.far
ngramcount --order=8 cmudict/cmudict.far > cmudict/cmudict.cnts
ngrammake --v=2 --bins=3 --method=kneser_ney cmudict/cmudict.cnts > cmudict/cmudict.mod
ngramprint --ARPA cmudict/cmudict.mod > cmudict/cmudict.arpa
phonetisaurus-arpa2wfst-omega --lm=cmudict/cmudict.arpa > cmudict/cmudict.fst
I tried the FST with phonetisaurus-g2p as follows:
phonetisaurus-g2p --model=cmudict/cmudict.fst --nbest=3 --input=HELLO --words
But it didn't return anything....
Appreciate any help on this matter.
It is very important to keep the dictionary in the right format. Phonetisaurus is very sensitive about that: it requires the word and the phonemes to be tab-separated; spaces will not work. It also does not allow the pronunciation variant numbers CMUSphinx uses, like (2) or (3). You need to clean up the dictionary, for example with a simple Python script, before feeding it into Phonetisaurus. Here is the one I use:
#!/usr/bin/python
import sys

if len(sys.argv) != 3:
    print "Split the list on train and test sets"
    print
    print "Usage: traintest.py file split_count"
    exit()

infile = open(sys.argv[1], "r")
outtrain = open(sys.argv[1] + ".train", "w")
outtest = open(sys.argv[1] + ".test", "w")

cnt = 0
split_count = int(sys.argv[2])
for line in infile:
    items = line.split()
    if items[0][-1] == ')':
        items[0] = items[0][:-3]
    if items[0].find("_") > 0:
        continue
    line = items[0] + '\t' + " ".join(items[1:]) + '\n'
    if cnt % split_count == 3:
        outtest.write(line)
    else:
        outtrain.write(line)
    cnt = cnt + 1
