I have a menu that has multiple frame ranges for different objects being animated. I need that menu to save the frame range every time it's closed and reopened. Is it possible to save out data to an external file?
There are many ways to save out data externally. Probably one of the easiest ways is using the json module:
import os
import json

path = "PATH/TO/YOUR/FILE/data.json"

def save_data(frame_range):
    with open(path, "w") as f:
        f.write(json.dumps(frame_range))

def load_data():
    if os.path.exists(path):
        with open(path, "r") as f:
            return json.loads(f.read())

save_data([1, 100])
stored_range = load_data()
print stored_range
# Output: [1, 100]
In this case it's dumping a list, but JSON supports much more (dictionaries, nested data structures, etc.).
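For example, a minimal sketch of a nested round trip (the settings dictionary and the temp-file path here are just placeholders):

```python
import json
import os
import tempfile

# Hypothetical per-object frame ranges, nested one level deep.
settings = {"hero": {"start": 1, "end": 100}, "prop": {"start": 20, "end": 48}}

path = os.path.join(tempfile.gettempdir(), "frame_ranges.json")

# Dump the nested structure...
with open(path, "w") as f:
    json.dump(settings, f)

# ...and load it back intact.
with open(path, "r") as f:
    restored = json.load(f)

print(restored["hero"]["end"])  # 100
```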
Another alternative is saving data with the pickle module:
import pickle

path = "PATH/TO/YOUR/FILE/data.p"

def save_data(frame_range):
    with open(path, "w") as f:
        f.write(pickle.dumps(frame_range))

save_data([1, 100])
You can also use cPickle to export in a binary format.
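A minimal sketch of a binary round trip with pickle (the path is a placeholder; binary file mode is what pickle's binary protocols require):

```python
import os
import pickle
import tempfile

path = os.path.join(tempfile.gettempdir(), "frame_range.p")

frame_range = [1, 100]

# Binary mode ("wb"/"rb") is required for pickle's binary protocols.
with open(path, "wb") as f:
    pickle.dump(frame_range, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored)  # [1, 100]
```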
In Maya itself, you can save settings directly to the user's preferences:
cmds.optionVar(iv=("frameStart", 1))
cmds.optionVar(iv=("frameEnd", 100))
For more complex data structures you can also store a JSON string directly in cmds.optionVar.
import json
def saveS():
    startFrame = cmds.textField('one', q=True, text=True)
    endFrame = cmds.textField('two', q=True, text=True)
    frame = {}
    frame["start"] = startFrame
    frame["end"] = endFrame
    file2 = open("maya/2018/scripts/test_pickle/dataCopy.json", "w")
    json.dump(frame, file2)
    file2.close()

def helpMenu():
    if cmds.window('window1_ui', q=True, ex=True):
        cmds.deleteUI('window1_ui')
    cmds.window('window1_ui')
    cmds.columnLayout(adj=True)
    cmds.checkBox(label='Default')
    cmds.textField('one', cc='saveS()')
    cmds.textField('two', cc='saveS()')
    json_file = open("maya/2018/scripts/test_pickle/dataCopy.json", "r")
    frame = json.load(json_file)
    json_file.close()
    ted = frame["start"]
    cmds.textField('one', edit=True, text=ted)
    fred = frame["end"]
    cmds.textField('two', edit=True, text=fred)
    cmds.showWindow('window1_ui')

helpMenu()
This is what I came up with. It works. Thanks for your help, @GreenCell.
Over the last few days I asked a couple of questions related to string and file encoding in Python, but the real problem is harder than I assumed, and now I have a clearer understanding of it.
Windows encodes text files in different formats depending on the language or language group. I need to be able to read an .ini (text) file that may be encoded in any of these formats, because some keys contain strings that I have to display on forms and menus on the screen.
So I would like to write something (that works with any text file and encoding format) similar to this code example:
from configparser import ConfigParser
import magic

def detect(iniFileName):
    return magic.Magic(mime_encoding=True).from_file(iniFileName)

#---------------------------------------------------------------------------

encoding = detect('config.ini')
ini = ConfigParser()
ini.read('config.ini', encoding)
title = ini.get('USERENTRY-FORM', 'TITLE')
'''
then title is passed to the tk form
'''
UserEntryForm.setTitle(title)
if _DEBUG == True:
    print("USERENTRY-FORM title=", title)
This is the new solution, which seems to work better because it recognizes the encoding format more reliably; AIniParser is a wrapper class around ConfigParser.
from chardet import detect

def chardetPrint(filename):
    text = open(filename, 'rb').read()
    print(filename, ": ", detect(text))

#---------------------------------------------------------------------------

def chardet(filename):
    text = open(filename, 'rb').read()
    print(filename, ": ", detect(text)['encoding'])  # only for test
    return detect(text)['encoding']

if __name__ == '__main__':
    from ainiparser import AIniParser

    def testIniRead(filename, section, key):
        with open(filename, "rb") as f:
            line = f.readline()
            print("line 1: ", line)
        encode = chardet(filename)
        ini = AIniParser(filename, encoding=encode)
        ini._open()
        text = ini.getString(section, key)
        print(text)

    def main():
        testIniRead("../cdata/config-cro-16.ini", "CGUIINTF", "game-turn-text")
        testIniRead("../bat/output-lva-16/idata/config-lva-16.ini", "CGUIINTF", "game-turn-text")
        testIniRead("../idata/config-lva-16.ini", "CGUIINTF", "game-turn-text")
        testIniRead("../idata/domande.ini", "D2", "B")

    #---------------------------------------------------------------------------
    main()
This solution seems to recognize file encodings better, but I am still testing it.
I have a CSV file of several thousand rows in multiple languages, and I am thinking of using the Google Cloud Translate API to translate the foreign-language text into English. I used a simple snippet to find out if everything works properly, and the code runs smoothly.
from google.cloud import translate_v2 as translate
from time import sleep
from tqdm.notebook import tqdm
import multiprocessing as mp
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "file path.py"
translate_client = translate.Client()
text = "Good Morning, My Name is X."
target ="ja"
output = translate_client.translate(text, target_language=target)
print(output)
I now want to import the CSV file (using pandas), translate the text, and save the output as a CSV file, but I don't know how to do that. Most of the examples I found stop at translating sample text, just like the one above.
Can anyone suggest how I can do this?
To translate the text in a CSV file and save the output to a CSV file using the Google Cloud Translation API, you can use the code below:
import csv
from pathlib import Path

def translate_text(target, text):
    """Translates text into the target language.

    Target must be an ISO 639-1 language code.
    See https://g.co/cloud/translate/v2/translate-reference#supported_languages
    """
    import six
    from google.cloud import translate_v2 as translate

    translate_client = translate.Client()

    if isinstance(text, six.binary_type):
        text = text.decode("utf-8")

    # Text can also be a sequence of strings, in which case this method
    # will return a sequence of results for each text.
    result = translate_client.translate(text, target_language=target)

    # print(u"Text: {}".format(result["input"]))
    # print(u"Translation: {}".format(result["translatedText"]))
    # print(u"Detected source language: {}".format(result["detectedSourceLanguage"]))
    return result["translatedText"]

def main(input_file, translate_to):
    """
    Translate a text file and save as a CSV file
    using Google Cloud Translation API
    """
    input_file_path = Path(input_file)
    target_lang = translate_to
    output_file_path = input_file_path.with_suffix('.csv')

    with open(input_file_path) as f:
        list_lines = f.readlines()
    total_lines = len(list_lines)

    with open(output_file_path, 'w') as csvfile:
        my_writer = csv.writer(csvfile, delimiter=',', quotechar='"')
        my_writer.writerow(['id', 'original_text', 'translated_text'])
        for i, each_line in enumerate(list_lines):
            line_id = f'{i + 1:04}'
            original_text = each_line.strip('\n')  # Strip for the writer(*).
            translated_text = translate_text(
                target=target_lang,
                text=each_line)
            my_writer.writerow([line_id, original_text, translated_text])  # (*)
            # Progress monitor, non-essential.
            print(f"""
            {line_id}/{total_lines:04}
            {original_text}
            {translated_text}""")

if __name__ == '__main__':
    origin_file = input('Input text file? >> ')
    output_lang = input('Output language? >> ')
    main(input_file=origin_file,
         translate_to=output_lang)
Example:
Translating the text in the input file to the target language "es", the output gets stored in the same CSV file.
Input:
new.csv
How are you doing,Is everything fine there
Do it today
Output:
new.csv
id,original_text,translated_text
0001,"How are you doing,Is everything fine there",¿Cómo estás? ¿Está todo bien allí?
0002,Do it today,Hazlo hoy
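Since the question asks about translating a column of an existing CSV, here is a hedged sketch of the same read-translate-write pattern using only the standard csv module; `fake_translate` is a stand-in for the real API call (an assumption for the sketch), so the file handling is the focus:

```python
import csv
import io

# Stand-in for translate_client.translate(...); the real call would
# return the API's "translatedText" field instead.
def fake_translate(text, target_language):
    return "[{}] {}".format(target_language, text)

# Simulate an input CSV with a 'text' column (in-memory file for the sketch;
# with a real file you would pass a path to open() instead).
src = io.StringIO("id,text\n1,Good morning\n2,See you soon\n")

reader = csv.DictReader(src)
rows = []
for row in reader:
    # Add the translation as a new column rather than overwriting the source.
    row["translated_text"] = fake_translate(row["text"], "ja")
    rows.append(row)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["id", "text", "translated_text"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```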
I am analyzing XML-structured text files about insider dealings. I wrote some code to parse the XML structure and write my output to a CSV file. The results for each file are written on one line, with the extracted information in individual columns. But in some files a piece of information is present multiple times, and my code overwrites the value in the cell, so in the end only one date remains in the cell of my CSV file.
import csv
import glob
import re
import string
import time
import bs4 as bs
# User-defined glob pattern for the files to be parsed
# (note: a raw string literal cannot end with a backslash)
TARGET_FILES = r'D:\files\*'
# User defined file pointer to LM dictionary
# User defined output file
OUTPUT_FILE = r'D:\ouput\Parser.csv'
# Setup output
OUTPUT_FIELDS = [r'Datei', 'transactionDate', r'transactionsCode', r'Director', r'Officer', r'Titel', r'10-% Eigner', r'sonstiges', r'SignatureDate']
def main():
    f_out = open(OUTPUT_FILE, 'w')
    wr = csv.writer(f_out, lineterminator='\n', delimiter=';')
    wr.writerow(OUTPUT_FIELDS)
    file_list = glob.glob(TARGET_FILES)
    for file in file_list:
        print(file)
        with open(file, 'r', encoding='UTF-8', errors='ignore') as f_in:
            soup = bs.BeautifulSoup(f_in, 'xml')
            output_data = get_data(soup)
            output_data[0] = file
            wr.writerow(output_data)

def get_data(soup):
    # overwrites the transactionDate if more than one transaction is
    # disclosed on the current form; the index determines the output column
    _odata = [0] * 9
    try:
        for item in soup.find_all('transactionDate'):
            _odata[1] = item.find('value').text
    except AttributeError:
        _odata[1] = 'keine Angabe'
    try:
        for item in soup.find_all('transactionAcquiredDisposedCode'):
            _odata[2] = item.find('value').text
    except AttributeError:
        _odata[2] = 'ka'
    for item in soup.find_all('reportingOwnerRelationship'):
        try:
            _odata[3] = item.find('isDirector').text
        except AttributeError:
            _odata[3] = 'ka'
        try:
            _odata[4] = item.find('isOfficer').text
        except AttributeError:
            _odata[4] = 'ka'
        try:
            _odata[5] = item.find('officerTitle').text
        except AttributeError:
            _odata[5] = 'ka'
        try:
            _odata[6] = item.find('isTenPercentOwner').text
        except AttributeError:
            _odata[6] = 'ka'
        try:
            _odata[7] = item.find('isOther').text
        except AttributeError:
            _odata[7] = 'ka'
    try:
        for item in soup.find_all('ownerSignature'):
            _odata[8] = item.find('signatureDate').text
    except AttributeError:
        _odata[8] = 'ka'
    return _odata

if __name__ == '__main__':
    print('\n' + time.strftime('%c') + '\nGeneric_Parser.py\n')
    main()
    print('\n' + time.strftime('%c') + '\nNormal termination.')
The code works, but it overwrites columns if, for example, more than one transaction date is given in a file. So I need code that automatically uses the next column for each additional transaction date. How could this work?
I would be glad if someone has a solution for my problem. Thanks a lot!
Your issue is that you are iterating over the result of soup.find_all() and writing to the same slot every time. You need to do something with _odata in each iteration; otherwise you will only end up with whatever was written to it last.
If you can show us what the data you're trying to parse actually looks like, perhaps we could give a more specific answer.
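As a hedged sketch of one way to avoid the overwrite (using xml.etree and made-up sample data that mirrors the question's element names), collect the repeated values into a list first and then decide how to lay them out:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Minimal stand-in document with two transactionDate elements,
# mirroring the structure described in the question.
doc = """
<form>
  <transactionDate><value>2020-01-02</value></transactionDate>
  <transactionDate><value>2020-01-09</value></transactionDate>
</form>
"""

root = ET.fromstring(doc)

# Collect every date instead of overwriting a single cell.
dates = [el.find('value').text for el in root.iter('transactionDate')]

out = io.StringIO()
wr = csv.writer(out, delimiter=';')
# One option: join repeated values into one cell...
wr.writerow(['file.xml', '|'.join(dates)])
# ...another: give each date its own column.
wr.writerow(['file.xml'] + dates)

print(dates)  # ['2020-01-02', '2020-01-09']
```

With per-date columns, the header row would also need enough column names for the widest file.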
I have data which is accessed via an HTTP request and sent back by the server in a comma-separated format. I have the following code:
site= 'www.example.com'
hdr = {'User-Agent': 'Mozilla/5.0'}
req = urllib2.Request(site,headers=hdr)
page = urllib2.urlopen(req)
soup = BeautifulSoup(page)
soup = soup.get_text()
text=str(soup)
The content of text is as follows:
april,2,5,7
may,3,5,8
june,4,7,3
july,5,6,9
How can I save this data into a CSV file?
I know I can do something along the lines of the following to iterate line by line:
import StringIO
s = StringIO.StringIO(text)
for line in s:
But I'm unsure how to properly write each line to the CSV file.
EDIT: Thanks for the feedback. As suggested, the solution was rather simple and can be seen below.
Solution:
import StringIO

s = StringIO.StringIO(text)
with open('fileName.csv', 'w') as f:
    for line in s:
        f.write(line)
General way:

## text = list of strings to be written to the file
with open('csvfile.csv', 'wb') as file:
    for line in text:
        file.write(line)
        file.write('\n')

OR

Using the csv writer:

import csv
with open(<path to output_csv>, "wb") as csv_file:
    writer = csv.writer(csv_file, delimiter=',')
    for line in data:
        writer.writerow(line)

OR

Simplest way:

f = open('csvfile.csv', 'w')
f.write('hi there\n')  # Give your csv text here.
## Python will convert \n to os.linesep
f.close()
You could just write to the file as you would write any normal file.
with open('csvfile.csv', 'wb') as file:
    for l in text:
        file.write(l)
        file.write('\n')

If, just in case, it is a list of lists, you could directly use the built-in csv module:

import csv
with open("csvfile.csv", "wb") as file:
    writer = csv.writer(file)
    writer.writerows(text)
I would simply write each line to a file, since it's already in a CSV format:
write_file = "output.csv"
with open(write_file, "wt", encoding="utf-8") as output:
    for line in text:
        output.write(line + '\n')
I can't recall how to write lines with line-breaks at the moment, though :p
Also, you might like to take a look at this answer about write(), writelines(), and '\n'.
To complement the previous answers, I whipped up a quick class to write to CSV files. It makes it easier to manage and close open files and achieve consistency and cleaner code if you have to deal with multiple files.
import csv
import os

class CSVWriter():
    filename = None
    fp = None
    writer = None

    def __init__(self, filename):
        self.filename = filename
        self.fp = open(self.filename, 'w', encoding='utf8')
        self.writer = csv.writer(self.fp, delimiter=';', quotechar='"', quoting=csv.QUOTE_ALL, lineterminator='\n')

    def close(self):
        self.fp.close()

    def write(self, elems):
        self.writer.writerow(elems)

    def size(self):
        return os.path.getsize(self.filename)

    def fname(self):
        return self.filename
Example usage:
mycsv = CSVWriter('/tmp/test.csv')
mycsv.write((12,'green','apples'))
mycsv.write((7,'yellow','bananas'))
mycsv.close()
print("Written %d bytes to %s" % (mycsv.size(), mycsv.fname()))
Have fun
What about this:
with open("your_csv_file.csv", "w") as f:
    f.write("\n".join(text))

str.join() returns a string which is the concatenation of the strings in the iterable; the separator between elements is the string providing the method.
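A quick sketch of that one-liner using the sample rows from the question:

```python
# The month rows from the question, already comma separated per line.
text = ["april,2,5,7", "may,3,5,8", "june,4,7,3", "july,5,6,9"]

# "\n".join inserts a newline *between* rows; no trailing newline is added.
csv_body = "\n".join(text)
print(csv_body)
```

If a trailing newline matters to downstream tools, append one explicitly when writing.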
In my situation...
with open('UPRN.csv', 'w', newline='') as out_file:
    writer = csv.writer(out_file)
    writer.writerow(('Name', 'UPRN', 'ADMIN_AREA', 'TOWN', 'STREET', 'NAME_NUMBER'))
    writer.writerows(lines)

You need to include the newline='' option in the call to open() and it will work.
https://www.programiz.com/python-programming/writing-csv-files
I am working on setting up some usable data for semantic analysis. I have a corpus of raw text data that I am iterating over. I open the data, read it as a string, split into a list, and prepare the data to be built into a dataset in a later function. However, when I build the dataset, my most common words end up being punctuation. I need to remove all punctuation from the list before I process the data further.
import os
import collections
import string
import sys
import tensorflow as tf
import numpy as np
from six.moves import xrange

totalvocab = []

# Loop through all files in the 'Data' directory
for subdir, dirs, files in os.walk('Data'):
    for file in files:
        filepath = subdir + os.sep + file
        print(filepath)

        # Open file, convert input to string, split into list
        def read_data(filepath):
            with open(filepath, 'r') as f:
                data = tf.compat.as_str(f.read()).split()
            return data

        # Run function on data, add file data to full data set.
        filevocab = read_data(filepath)
        totalvocab.extend(filevocab)
        filevocab_size = len(filevocab)
        print('File vocabulary size: %s' % filevocab_size)

totalvocab_size = len(totalvocab)
print('Total vocabulary size: %s' % totalvocab_size)
If I do the following:
def read_data(filepath):
    with open(filepath, 'r') as f:
        data = tf.compat.as_str(f.read())
        data.translate(string.punctuation)
        data.split()
        return data
The words are split into individual letters.
Any other methods I have attempted have errored out.
There are a couple of errors in the code:
str.split() and str.translate() do not modify the string in place; strings are immutable, so both return a new value that must be assigned.
str.translate() expects a translation table (a mapping of character ordinals), not a plain string of characters.
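To illustrate the mapping point, a small sketch of str.maketrans() feeding str.translate():

```python
import string

# str.maketrans('', '', chars) builds a table mapping each character in
# `chars` to None, i.e. "delete these characters".
table = str.maketrans('', '', string.punctuation)

cleaned = "Hello, world! (test)".translate(table)
print(cleaned)  # Hello world test
```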
To fix:
def read_data(filepath):
    with open(filepath, 'r') as f:
        data = tf.compat.as_str(f.read())
        data = data.translate(str.maketrans('', '', string.punctuation))
        return data.split()
Removing punctuation may or may not do what you want; e.g., hyphenated words will become concatenated. You could alternatively replace punctuation with a space.
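A sketch of that space-replacement variant using re (the sample sentence is made up for illustration):

```python
import re
import string

text = "state-of-the-art models, re-trained weekly!"

# Replace every punctuation character with a space, then split;
# consecutive spaces collapse because split() ignores empty fields.
spaced = re.sub('[%s]' % re.escape(string.punctuation), ' ', text)
tokens = spaced.split()
print(tokens)  # ['state', 'of', 'the', 'art', 'models', 're', 'trained', 'weekly']
```

With deletion instead, "re-trained" would have become the single token "retrained".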