I need to export all my users, together with their webform submission data, to an Excel file. I can export users, but I don't know how to include the related webform data. Please help me.
How about this:
SELECT CONCAT(GROUP_CONCAT(CONCAT('"', sd.data, '"')), ', "', u.uid, '","', u.name, '", "', u.mail, '"')
FROM webform_submitted_data sd
JOIN webform_submissions s ON s.sid = sd.sid
JOIN users u ON u.uid = s.uid
GROUP BY s.sid
LIMIT 1
INTO OUTFILE '/Users/nandersen/Downloads/users.csv'
FIELDS TERMINATED BY '' ENCLOSED BY '' LINES TERMINATED BY '\n';
You need to group-concatenate the webform data, which spans multiple rows but one column, with the user data, which spans multiple columns but one row. By concatenating the submission data separately and grouping by the submission id, you get one row per submission with everything you need.
Will output something like this:
"Nate","Andersen","nate#test.com","123 Atlanta Avenue","Nederland","Texas","12345","4095496504","safe_key4","safe_key6","safe_key2","09/07/1989", "69","oknate", "nate#test.com"
I have had to look up hundreds (if not thousands) of free-text answers on google, making notes in Excel along the way and inserting SAS-code around the answers as a last step.
The resulting output contains an unnecessary number of blank spaces, which seems to confuse SAS's search to the point where the observations can't be properly located.
It works if I manually erase the superfluous spaces, but that would probably take hours. Is there an automated fix for this, either in SAS or in Excel?
I tried using the STRIP-function, to no avail:
else if R_res_ort_txt=strip(" arild ") and R_kom_lan=strip(" skåne ") then R_kommun=strip(" Höganäs " );
If you want to generate a string like:
if R_res_ort_txt="arild" and R_kom_lan="skåne" then R_kommun="Höganäs";
from three variables, let's call them A, B, and C, then just use code like:
string=catx(' ','if R_res_ort_txt=',quote(trim(A))
,'and R_kom_lan=',quote(trim(B))
,'then R_kommun=',quote(trim(C)),';') ;
Or, if you are just writing that string to a file, use this PUT statement syntax:
put 'if R_res_ort_txt=' A :$quote. 'and R_kom_lan=' B :$quote.
'then R_kommun=' C :$quote. ';' ;
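For comparison, the trim-then-quote code generation can be sketched outside SAS as well. Here is a minimal Python illustration (the column names follow the question; the helper function and sample values are made up):

```python
# Build SAS IF-statements from messy free-text triples, trimming stray
# whitespace before quoting -- the same idea as catx() with quote(trim()).
def make_sas_if(ort, lan, kommun):
    ort, lan, kommun = ort.strip(), lan.strip(), kommun.strip()
    return (f'if R_res_ort_txt="{ort}" and R_kom_lan="{lan}" '
            f'then R_kommun="{kommun}";')

print(make_sas_if(" arild ", " skåne ", " Höganäs "))
# -> if R_res_ort_txt="arild" and R_kom_lan="skåne" then R_kommun="Höganäs";
```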
A saner solution would be to continue using the free-text answers as data and perform your matching criteria for transformations with a left join.
proc import out=answers datafile='my-free-text-answers.xlsx' dbms=xlsx replace;
run;
data have;
attrib R_res_ort_txt R_kom_lan length=$100;
input R_res_ort_txt ...;
datalines4;
... whatever all those transforms will be performed on...
;;;;
proc sql;
create table want as
select
have.* ,
answers.R_kommun_answer as R_kommun
from
have
left join
answers
on
have.R_res_ort_txt = answers.res_ort_answer
and have.R_kom_lan = answers.kom_lan_answer
;
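The left-join lookup above can also be pictured with a plain lookup table. Here is a rough Python sketch with invented data, where a dict plays the role of the answers table:

```python
# The answers table keyed on the matching criteria; unmatched rows
# simply get None for R_kommun, just like a left join.
answers = {("arild", "skåne"): "Höganäs"}

have = [
    {"R_res_ort_txt": "arild", "R_kom_lan": "skåne"},
    {"R_res_ort_txt": "lund",  "R_kom_lan": "skåne"},
]

want = [
    {**row, "R_kommun": answers.get((row["R_res_ort_txt"], row["R_kom_lan"]))}
    for row in have
]
print([r["R_kommun"] for r in want])  # ['Höganäs', None]
```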
I solved this by adding quotes in excel using the flash fill function:
https://www.youtube.com/watch?v=nE65QeDoepc
I have two sets of similar code that give different output. The first example does not return any output, but the second example returns output for the same search input.
First Example:
sql = "SELECT accessionID, title, ISBN, publisher, publicationYear FROM Books WHERE %s LIKE %s";
cursor.execute(sql,(col, "%" + values + "%",))
Second Example:
sql = "SELECT accessionID, title, ISBN, publisher, publicationYear FROM Books WHERE title LIKE %s";
cursor.execute(sql,("%" + values + "%",))
What I am trying to do is make the WHERE clause dynamic, depending on which text field the user searches on. For example, if a user types something into the title text box, the query should only look at the title column.
Another way I could think of is hardcoding If conditions, but only the first If condition works and the subsequent ones do not.
My question is: how can I make the SQL line dynamic (as in the first example), with two %s placeholders in the query, and still get the same output?
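As a side note: DB-API placeholders can only bind values, never identifiers, which is why the first example fails (the column name is sent as a quoted string, so the condition compares two literals). A common workaround is to whitelist the column name and format it into the SQL, binding only the search value. A minimal sketch using the stdlib sqlite3 module (which uses ? placeholders where MySQL drivers use %s), with invented table data:

```python
import sqlite3

ALLOWED_COLS = {"title", "ISBN", "publisher"}  # whitelist of searchable columns

def search_books(cursor, col, value):
    if col not in ALLOWED_COLS:               # guards against SQL injection
        raise ValueError(f"unknown column: {col}")
    # The identifier is formatted in (safe, because it was whitelisted);
    # only the search value is bound as a parameter.
    sql = f"SELECT accessionID, title FROM Books WHERE {col} LIKE ?"
    cursor.execute(sql, ("%" + value + "%",))
    return cursor.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Books (accessionID, title, ISBN, publisher)")
conn.execute("INSERT INTO Books VALUES (1, 'SQL Basics', 'x', 'y')")
print(search_books(conn.cursor(), "title", "SQL"))  # [(1, 'SQL Basics')]
```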
I have a huge dataset which I need to import from Excel into Access (~800k lines). However, I can ignore lines with a particular column value, which make up like 90% of the actual dataset. So in fact, I only really need like 10% of the lines imported.
In the past I've been importing Excel Files line-by-line in the following manner (pseudo code):
For i = 1 To EOF
sql = "Insert Into [Table] (Column1, Column2) VALUES ('" & _
xlSheet.Cells(i, 1).Value & " ', '" & _
xlSheet.Cells(i, 2).Value & "');"
Next i
DoCmd.RunSQL sql
With ~800k lines this takes way too long, as a query is created and run for every single line.
Considering the fact that I can also ignore 90% of the lines, what is the fastest approach to import the dataset from Excel to Access?
I was thinking of creating a temporary excel file with a filter activated. And then I just import the filtered excel.
But is there a better/faster approach than this? Also, what is the fastest way to import an excel via vba access?
Thanks in advance.
Consider running a dedicated Access query for the import. Add the SQL below into an Access query window, or as an SQL query over a DAO/ADO connection. Include any WHERE clause you need; note that filtering on named column headers requires headers, while the query below is set to HDR=No:
INSERT INTO [Table] (Column1, Column2)
SELECT *
FROM [Excel 12.0 Xml;HDR=No;Database=C:\Path\To\Workbook.xlsx].[SHEET1$];
Alternatively, run a make-table query if you need a staging temp table (to remove the 90% of unwanted lines) prior to the final table, but note that this query replaces the table if it exists:
SELECT * INTO [NewTable]
FROM [Excel 12.0 Xml;HDR=No;Database=C:\Path\To\Workbook.xlsx].[SHEET1$];
A slight change in your code will do the filtering for you:
Dim strTest As String
For i = 1 To EOF
    strTest = xlSheet.Cells(i, 1).Value
    If Nz(strTest) <> "" Then
        sql = "Insert Into [Table] (Column1, Column2) VALUES ('" & _
              strTest & "', '" & _
              xlSheet.Cells(i, 2).Value & "');"
        DoCmd.RunSQL sql
    End If
Next i
I assume having the RunSQL outside the loop was just a mistake in your pseudocode. This tests whether the cell in the first column is empty, but you can substitute any condition that is appropriate for your situation.
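The filter-before-insert pattern is not specific to VBA. As a rough illustration of why it helps, here is a Python sketch using the stdlib sqlite3 module as a stand-in for Access (the data and the "skip" marker are made up): unwanted rows are dropped first, and the survivors go in through one batched call rather than one query per row.

```python
import sqlite3

# Fake worksheet: 90% of rows carry the ignorable column value.
rows = [("keep", i) if i % 10 == 0 else ("skip", i) for i in range(1000)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Column1 TEXT, Column2 INTEGER)")

# Filter first, then batch-insert the survivors in a single statement
# instead of creating and running one INSERT per source row.
wanted = [r for r in rows if r[0] != "skip"]
conn.executemany("INSERT INTO t VALUES (?, ?)", wanted)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 100
```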
I'm a little late to the party but I stumbled on this looking for information on a similar problem. I thought I might share my solution in case it could help others or maybe OP, if he/she is still working on it. Here's my problem and what I did:
I have an established Access database of approximately the same number of rows as the OP's (6 columns, approx 850k rows). We receive a .xlsx file with one sheet and the data in the same structure as the db about once a week from a partner company.
This file contains the entire db, plus updates (new records and changes to old records, no deletions). The first column contains a unique identifier for each row. The Access db is updated when we receive the file through similar queries as suggested by Parfait, but since it's the entire 850k+ records, this takes 10-15 minutes or longer to compare and update, depending on what else we have going on.
Since it would be faster to load just the changes into the current Access db, I needed to produce a delta file (preferably .txt that can be opened with excel and saved as .xlsx if needed). I assume this is something similar to what OP was looking for. To do this I wrote a small application in c++ to compare the file from the previous week, to the one from the current week. The data itself is an amalgam of character and numerical data that I will just call string1 through string6 here for simplicity. It looks like this:
Col1 Col2 Col3 Col4 Col5 Col6
string1 string2 string3 string4 string5 string6
.......
''''Through 850k rows''''
After saving both .xlsx files as .txt tab delimited files, they look like this:
Col1\tCol2\tCol3\tCol4\tCol5\tCol6\n
string1\tstring2\tstring3\tstring4\tstring5\tstring6\n
....
//Through 850k rows//
The fun part! I took the old .txt file and stored it as a hash table (using the c++ unordered_map from the standard library). Then with an input filestream from the new .txt file I used Col1 in the new file as a key to the hash table and output any differences to two different files. One you could use a query to append the db with new data and the other you could use to update data that has changed.
I've heard it's possible to create a more efficient hash table than the unordered_map but at the moment, this works well so I'll stick with it. Here's my code.
#include <iostream>
#include <fstream>
#include <string>
#include <iterator>
#include <unordered_map>

int main()
{
    using namespace std;

    // variables
    const string myInFile1{"OldFile.txt"};
    const string myInFile2{"NewFile.txt"};
    string mappedData;
    string key;

    // hash table objects
    unordered_map<string, string> hashMap;
    unordered_map<string, string>::iterator cursor;

    // input files
    ifstream fin1;
    ifstream fin2;
    fin1.open(myInFile1);
    fin2.open(myInFile2);

    // output files
    ofstream fout1;
    ofstream fout2;
    fout1.open("For Updated.txt"); // updating old records
    fout2.open("For Upload.txt");  // uploading new records

    // This loop takes the original input file (i.e. what is in the database already)
    // and hashes the entire file using the Col1 data as a key. On my system this takes
    // approximately 2 seconds for 850k+ rows with 6 columns.
    while (fin1)
    {
        getline(fin1, key, '\t');        // get the first column
        getline(fin1, mappedData, '\n'); // get the other 5 columns
        if (key.empty())                 // guard against the empty read at EOF
            break;
        hashMap[key] = mappedData;       // store the data in the hash table
    }
    fin1.close();

    // output file headings
    fout1 << "Col1\t" << "Col2\t" << "Col3\t" << "Col4\t" << "Col5\t" << "Col6\n";
    fout2 << "Col1\t" << "Col2\t" << "Col3\t" << "Col4\t" << "Col5\t" << "Col6\n";

    // This loop takes the second input file and reads each line, first up to the
    // first tab delimiter, storing it as "key", then up to the newline character,
    // storing it as "mappedData", and then uses the value of key to search the hash
    // table. If the key is not found in the hash table, a new record is written to
    // the upload output file. If it is found, the mappedData from the file is
    // compared to that of the hash table and, if different, the updated record is
    // sent to the update output file. I realize that while(fin2) is not the optimal
    // syntax for this loop, but I have included a check to see if the key is empty
    // (eof) after retrieving the current line from the input file. YMMV on the time
    // here depending on how many records are added or updated (1000 records takes
    // about another 5 seconds on my system).
    while (fin2)
    {
        getline(fin2, key, '\t');        // get key from Col1 in the input file
        getline(fin2, mappedData, '\n'); // get the mappedData (Col2-Col6)
        if (key.empty())                 // exit the file read if key is empty
            break;
        cursor = hashMap.find(key);      // look the key up in the hash table
        if (cursor != hashMap.end())     // key already exists in the database
        {
            if (cursor->second != mappedData) // record changed: send to update file
            {
                fout1 << key << "\t" << mappedData << "\n";
            }
        }
        else // key not found: new record, send to upload file
        {
            fout2 << key << "\t" << mappedData << "\n";
        }
    }
    fin2.close();
    fout1.close();
    fout2.close();
    return 0;
}
There are a few things I am working on to make this an easy-to-use executable (for example, reading the XML structure of the Excel zip file for direct reading, or maybe using an ODBC connection), but for now I'm just testing it to make sure the outputs are correct. Of course, the output files would then have to be loaded into the Access database using queries similar to what Parfait suggested. Also, I'm not sure whether Excel or Access VBA has a library for building hash tables, but it might be worth exploring further if it saves time in accessing the Excel data. Any criticisms or suggestions are welcome.
I want to fetch data & save it to Excel. The web data is in the following format:
Kawal store Rate this
Wz5a Delhi - 110018 | View Map
Call: XXXXXXXXXX
Distance : Less than 5 KM
Also See : Grocery Stores
Edit this
Photos
I want to save only the bold fields in following format:
COLUMN1 COLUMN2 COLUMN3
A single search page contains different data formats; for example, sometimes PHOTOS is not present.
Sample URL: http://www.justdial.com/Delhi/Grocery-Stores-%3Cnear%3E-Ramesh-Nagar/ct-70444/page-10
The page number can be changed to get other data in the series, while the rest of the URL stays the same.
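For the extraction step, one general approach is to pull the wanted fields out of the HTML and write each listing as one CSV row. Here is a minimal sketch with Python's stdlib html.parser and csv modules; the toy HTML fragment and the choice of `<b>` tags are assumptions for illustration, since the real justdial markup will differ:

```python
import csv
import io
from html.parser import HTMLParser

# Toy stand-in for one listing; the real page's markup will differ.
PAGE = ("<div><b>Kawal store</b> ... <b>Wz5a Delhi - 110018</b>"
        " ... <b>Grocery Stores</b></div>")

class BoldExtractor(HTMLParser):
    """Collect the text of every <b>...</b> element on the page."""
    def __init__(self):
        super().__init__()
        self.in_b = False
        self.fields = []
    def handle_starttag(self, tag, attrs):
        if tag == "b":
            self.in_b = True
    def handle_endtag(self, tag):
        if tag == "b":
            self.in_b = False
    def handle_data(self, data):
        if self.in_b:
            self.fields.append(data.strip())

p = BoldExtractor()
p.feed(PAGE)

buf = io.StringIO()
csv.writer(buf).writerow(p.fields)  # one listing per CSV row
print(buf.getvalue().strip())
# -> Kawal store,Wz5a Delhi - 110018,Grocery Stores
```

Fetching each page is then a loop over the page numbers in the URL, feeding each response body to the parser and appending one row per listing to the CSV file.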
While generating an actual Excel file can be a heavy task, generating a CSV (Comma-Separated Values) file is much easier, and CSV files are associated with Excel-like applications on Windows, Mac, and all Linux distributions.
If you insist on using Excel, here is a ready-made PHP utility for that.
Otherwise you can use the integrated fputcsv php function:
<?php
if (isset($_GET['export']) && $_GET['export'] == 'csv') {
    $filename = 'export.csv'; // set a filename for the download
    header('Content-Type: text/csv');
    header('Content-Disposition: attachment;filename=' . $filename);
    $fp = fopen('php://output', 'w');
    $myData = array(
        array('1234', '567', 'efed', 'ddd'), // row 1
        array('123', '456', '789'),          // row 2
        array('"aaa"', '"bbb"')              // row 3
    );
    foreach ($myData as $row) {
        fputcsv($fp, $row); // was $fields, an undefined variable
    }
    fclose($fp);
}
?>
It's as easy as that: the code above will output a CSV file with rows and columns identical to the structure of the $myData array, so you can replace its contents with whatever you'd like, even a result set from a DB.
For more information, see the CSV standard (RFC 4180).
I need to import certain information from an Excel file into an Access DB and in order to do this, I am using DAO.
The user gets the excel source file from a system, he does not need to directly interact with it. This source file has 10 columns and I would need to retrieve only certain records from it.
I am using this to retrieve all the records:
Set destinationFile = CurrentDb
Set dbtmp = OpenDatabase(sourceFile, False, True, "Excel 8.0;")
DoEvents
Set rs = dbtmp.OpenRecordset("SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536]")
My problem comes when I want to retrieve only certain records using a WHERE clause. The name of the field where I want to apply the clause is 'Date (UCT)' (remember that the user gets this source file from another system) and I can not get the WHERE clause to work on it. If I apply the WHERE clause on another field, whose name does not have ( ) or spaces, then it works. Example:
Set rs = dbtmp.OpenRecordset("SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536] WHERE Other = 12925")
The previous instruction will retrieve only the number of records where the field Other has the value 12925.
Could anyone please tell me how can I achieve the same result but with a field name that has spaces and parenthesis i.e. 'Date (UCT)' ?
Thank you very much.
Octavio
Try enclosing the field name in square brackets:
SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536] WHERE [Date (UCT)] = 12925
or if it's a date we are looking for:
SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536] WHERE [Date (UCT)] = #02/14/13#;
To use a date literal you must enclose it in # characters and write the date in MM/DD/YY format, regardless of any regional settings on your machine.