Good day, all! I recently started using PowerCenter (PWC), but it doesn't look much like IBM's DataStage... I have a CSV file, and I need to convert all the nicknames into new rows. Example:
Input: each row has an id, a name, and many nicknames separated by commas in one column.
Desired output: one row per nickname, repeating the id and name.
I've tried using Normalizer, Sequence, Expression, Filter and Aggregator transformations... but I can't get it to work!
If anyone can help me, I'll be very glad! Thanks so much!
Use a Java transformation to do this dynamic looping.
In the Java transformation (JTX), create 3 input ports (id, name, nickname) and 3 output ports (o_id, o_name, o_nickname) with the correct data types.
Under the Java Code tab, in the Import Packages tab, place the code below:
import java.util.*;
import java.text.*;
Under "On InputRow " tab place the following code.
String[] words = nickname.split(",");
int len = words.length;
for (int i = 0; i < len; i++)  // iterate through the array of nicknames
{
    o_id = id;
    o_name = name;
    o_nickname = words[i];
    generateRow();  // emit one output row per nickname
}
Save the JTX and link the output ports (o_id, o_name, o_nickname) to the next transformation.
I have not tested the code, so if it fails please let me know.
*And every tool is a king until XML, JSON, PDF, or a TB-size table arrives on the battlefield.
I have seen multiple posts on exporting ngx-datatable to CSV/XLSX. However, I did not come across any post about importing an Excel file into ngx-datatable, which is basically what I need: I need to read an Excel file that the user uploads and display it in ngx-datatable (so the Excel file basically acts as the source for ngx-datatable).
Any guidelines / links on how to proceed would be a great help.
If you can convert the file to CSV, there is a lib called ngx-csv-parser (https://www.npmjs.com/package/ngx-csv-parser) that formats the data as an array of objects, which is the shape you need to pass to ngx-datatable. It says it is designed for Angular 13 but is compatible with previous versions; I've tested it in Angular 10 and it does work.
It has a setting to use a header or not. If you do, you can shape the columns prop of ngx-datatable with the same names as the header.
Example:
Let's say you have a csv file like this:
ColumnA,ColumnB,ColumnC
a,b,c
The output of using this lib (the way it is described in its readme) with header = true will be:
csvRecords = [{ColumnA: 'a', ColumnB: 'b', ColumnC: 'c'}]
Let's say you also have an array of columns:
columns = [
    { name: 'A', prop: 'ColumnA' },
    { name: 'B', prop: 'ColumnB' },
    { name: 'C', prop: 'ColumnC' }
]
Then use columns and csvRecords in your HTML:
<ngx-datatable
class="material"
[rows]="csvRecords"
[columns]="columns"
>
</ngx-datatable>
Your table will be filled with data from your csv.
I'm using Selenium to extract comments from YouTube.
Everything went well, but when I print comment.text, the output is only the last sentence.
I don't know how to save it for further analysis (cleaning and tokenization).
path = "/mnt/c/Users/xxx/chromedriver.exe"
This is the path that I saved and downloaded my chrome
chrome = webdriver.Chrome(path)
url = "https://www.youtube.com/watch?v=WPni755-Krg"
chrome.get(url)
chrome.maximize_window()
scrolldown
sleep = 5
chrome.execute_script('window.scrollTo(0, 500);'
time.sleep(sleep)
chrome.execute_script('window.scrollTo(0, 1080);')
time.sleep(sleep)
text_comment = chrome.find_element_by_xpath('//*[#id="contents"]')
comments = text_comment.find_elements_by_xpath('//*[#id="content-text"]')
comment_ids = []
Try this approach for getting the text of all comments. (The for-loop part was edited; there was no indentation in the previous code.)
for comment in comments:
    comment_ids.append(comment.get_attribute('id'))
    print(comment.text)
When I print, I can see all the texts here, but how can I access them for further study? Should I always use a for loop? I want to tokenize the texts, but the output is only the last sentence. Is there a way to save all the text to a file and open it again? I googled a lot but wasn't successful.
So it sounds like you're just trying to store these comments to reference later. Your current solution is to append them to a string and use a token to create substrings? I'm not familiar with Python's data structures, but this sounds like a great job for an array or a list, depending on how you plan to reference this data.
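For example, here is a minimal sketch (assuming the comments list from the Selenium code above) that keeps every comment's text in a list and writes it to a plain text file, one comment per line, so it can be reloaded later for cleaning and tokenization; the file name comments.txt is just an illustration:

# collect each comment's text in a list instead of only printing it
comment_texts = [comment.text for comment in comments]

# save one comment per line (file name is arbitrary)
with open("comments.txt", "w", encoding="utf-8") as f:
    for text in comment_texts:
        f.write(text + "\n")

# reload the comments later for cleaning/tokenization
with open("comments.txt", "r", encoding="utf-8") as f:
    loaded_comments = [line.strip() for line in f]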
I have a dictionary of famous people's names sorted by their initials. I want to convert these names into their respective Wikipedia page titles. These are the same for the first three given in this example, but Alexander Bell gets correctly converted to Alexander Graham Bell after running this code.
The algorithm works, although it took about an hour to do all the 'AA' names, and I am hoping for it to do this all the way up to 'ZZ'.
Is there any optimisation I can do here? For example, I saw something about batch requests but am not sure whether it applies to my algorithm.
Or is there a more efficient method that I could use to get this same information?
Thanks.
import wikipedia

PeopleDictionary = {'AA': ['Amy Adams', 'Aaron Allston'], 'AB': ['Alia Bhatt', 'Alexander Bell']}

for key, val in PeopleDictionary.items():
    for val in range(len(PeopleDictionary[key])):
        # take the top Wikipedia search result as the page title
        Name_URL_All = wikipedia.search(PeopleDictionary[key][val])
        if Name_URL_All:
            Name_URL = Name_URL_All[0]
            PeopleDictionary[key][val] = Name_URL
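One possible optimisation, as a rough sketch: assuming the hour is dominated by the network round trip of one wikipedia.search call per name, the lookups are independent and can be issued concurrently from a thread pool. This uses only the same wikipedia.search call as above; the helper first_search_hit and max_workers=8 are illustrative choices, not part of the original code:

import wikipedia
from concurrent.futures import ThreadPoolExecutor

PeopleDictionary = {'AA': ['Amy Adams', 'Aaron Allston'], 'AB': ['Alia Bhatt', 'Alexander Bell']}

def first_search_hit(name):
    # return the top Wikipedia search result, or the original name if nothing is found
    results = wikipedia.search(name)
    return results[0] if results else name

with ThreadPoolExecutor(max_workers=8) as pool:
    for key, names in PeopleDictionary.items():
        # pool.map preserves order, so the results line up with the original names
        PeopleDictionary[key] = list(pool.map(first_search_hit, names))

If the service starts rate-limiting, lower max_workers; the logic stays the same.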
My client uses SAS 9.3 running on an AIX (IBM Unix) server. The client interface is SAS Enterprise Guide 5.1.
I ran into this really puzzling problem: when using PROC IMPORT in combination with dbms=xlsx, it seems impossible to filter rows based on the value of a character variable (at least, when we look for an exact match).
With an .xls file, the following import works perfectly well; the expected subset of rows is written to myTable:
proc import out = myTable(where=(myString EQ "ABC"))
    datafile = "myfile.xls"
    dbms = xls replace;
run;
However, using the same data but this time in an .xlsx file, an empty dataset is created (having the right number of variables and adequate column types).
proc import out = myTable(where=(myString EQ "ABC"))
    datafile = "myfile.xlsx"
    dbms = xlsx replace;
run;
Moreover, if we exclude the where from the PROC IMPORT, the data is seemingly imported correctly. However, filtering is still not possible. For instance, this will create an empty dataset:
data myFilteredTable;
    set myTable;
    where myString EQ "ABC";
run;
The following will work, but is obviously not satisfactory:
data myFilteredTable;
    set myTable;
    where myString LIKE "ABC%";
run;
Also note that:
- Using compress or other string functions does not help.
- Filtering on numerical columns works fine for both xls and xlsx files.
My preferred method to read spreadsheets is to use excel libnames, but this is technically not possible at this time.
I wonder if this is a known issue; I couldn't find anything about it so far. Any help appreciated.
It sounds like your strings have extra characters on the end that are not being picked up by compress. Try using the countc function on myString to see whether any extra characters exist at the end. You can then figure out which characters to remove with compress once they're identified.
I am totally new to Stata and am wondering how to import .xlsx data into Stata. Let's say the data is in the subdirectory Data and is named "a b c.xlsx", so relative to the working directory the file is in /Data.
I am trying to do
import excel using "\Data\a b c.xlsx", sheet("a")
but it's not working
"it's not working" is anything but a useful error report. For future questions, please report the exact error given by Stata.
Let's say the file is in the directory /home/roberto then
clear
set more off
import excel using "/home/roberto/a b c.xlsx"
list
should work.
If you are already in /home/roberto (which you can verify using display c(pwd)), then
import excel using "a b c.xlsx"
should work.
Using backslashes to refer to directories is not encouraged. See Stata tip 65: Beware the backstabbing backslash, by Nick Cox.
See also help cd.