I'm trying to test Azure SQL Data Warehouse. I successfully created and connected to the database, but I've run into a snag as I attempt to load the tables. I'm trying to execute the following instructions:
To install AdventureWorksSQLDW2012:
-----------------------------------
4. Extract files from AdventureWorksSQLDW2012.zip file into a directory.
5. Edit aw_create.bat setting the following variables:
a. server=<servername> from step 1. e.g. mylogicalserver.database.windows.net
b. user=<username> from step 1 or another user with proper permissions
c. password=<passwordname> for user in step 5b
d. database=<database> created in step 1
e. schema=<schema> this schema will be created if it does not yet exist
6. Run aw_create.bat from a cmd prompt, from the directory where the files were unzipped.
This script will...
a. Drop any Adventure Works tables or views that already exist in the schema
b. Create the Adventure Works tables and views in the schema specified
c. Load each table using bcp
d. Validate the row counts for each table
e. Collect statistics on every column for each table
I completed the prerequisites of installing bcp and sqlcmd and used the -? flag to confirm the installations.
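For reference, the confirmation was just running each utility with its help flag; both print their usage text when correctly installed and reachable on the PATH:

bcp -?
sqlcmd -?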
Unfortunately, when I try to complete step 6 above I get the following error:
REM AdventureWorksSQLDW2012 sample database version 3.0 for DW Service Tue 06/27/2017 20:31:01.99 Bcp must be installed.
Has anyone else come across this error, or can anyone suggest a potential solution?
UPDATE: I've also added the path where BCP is located to my PATH environment variable. Still no luck.
The aw_create.bat contains a line where you need to provide the path of the bcp program. Once that was provided and the script saved, it worked like a charm.
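For illustration, the variable block of aw_create.bat might look like this after editing (a sketch: the server, user, password, database, and schema values are placeholders, and the exact name of the bcp path variable depends on your copy of the script):

REM Hypothetical excerpt of aw_create.bat -- values follow steps 5a-5e above.
set server=mylogicalserver.database.windows.net
set user=mylogin
set password=MyP@ssw0rd!
set database=mydwdb
set schema=adw
REM Full path to bcp.exe so the script does not rely on the PATH variable:
set bcp_path=C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\130\Tools\Binn\bcp.exe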
I am currently trying to parse some invoice data and came across this package on PyPI. It seems to be very handy for this task. There is one problem: I can't run it due to a 'no template found for' error. From the documentation [https://pypi.org/project/invoice2data/0.2.31/#description]
it becomes clear that you need to specify a template and then run the following command: invoice2data <invoice_file>.pdf. To specify a template, you need to run the command invoice2data --template-folder <yourfolder>.
When executing these commands in WSL (the Linux environment needed to run the package), it keeps complaining. My template files are in the folder 'tpl' (2 custom YML files), and the invoice file is called invoice.pdf; see the screenshots. I have attached the invoices too for clarity; please note these are all test files. My aim is to first make sure invoice2data operates, and then make my own custom YML template following the tutorial.
Somehow invoice2data does not get that it needs to assign a template (I really don't care which template at this stage) to execute the parsing. I have looked everywhere on Google; there are topics on this, but none offer me a solution. I hope somebody can help me out. Much appreciated, thanks a lot in advance.
Jeffrey
[Screenshots: 1. files and directories; 2. YML template files; 3. command-line execution error; 4a. invoice sample 1; 4b. invoice sample 2]
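In case it helps: per the PyPI page, the template folder and the input file go into a single invocation rather than two separate commands. A sketch using the names from the screenshots:

# Parse invoice.pdf using the custom templates in ./tpl, all in one call.
invoice2data --template-folder tpl invoice.pdf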
There are many queue_promotion_n tables where n is from 1 to 100.
There is an error on table 73 with a fairly simple query:
SELECT count(DISTINCT queue_id)
FROM "queue_promotion_73"
WHERE status_new > NOW() - interval '3 days';
ERROR: could not open file "base/16387/357386324.1" (target block
200005): No such file or directory
The DB uptime is 23 days. How can I fix this?
Check that you have up-to-date backups (or verify that your DB replica is in sync).
The PostgreSQL wiki recommends stopping the DB and rsyncing all PostgreSQL files to a safe location.
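On Linux, that might look something like this (a sketch; the data directory and service name vary by distribution and installation):

# Stop PostgreSQL, then copy the whole data directory somewhere safe.
sudo systemctl stop postgresql
rsync -a /var/lib/postgresql/ /mnt/safe-location/postgresql-backup/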
The file where the table is physically stored seems to be missing. You can check where PostgreSQL stores the table on disk using:
SELECT pg_relation_filepath('queue_promotion_73');
pg_relation_filepath
----------------------
base/16387/357386324
(1 row)
If you are sure that your hard drives/RAID controller works fine, you can try rebuilding the table. It is a good idea to try this on a replica or backup snapshot of the database first.
VACUUM FULL queue_promotion_73;
Check the relation path again:
SELECT pg_relation_filepath('queue_promotion_73');
It should be different now, hopefully with all the required files present.
The cause could be related to a hardware issue, so make sure to check DB consistency.
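One simple consistency sweep is to force a full read of every table, e.g. by dumping the database to nowhere (a sketch; substitute your database name):

# pg_dump reads every row; any other damaged relations will raise errors.
pg_dump your_database > /dev/null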
I have an SSIS package that loads an Excel file from a network drive. It's designed to load the content and then move the file to an archive folder.
Everything works fine when the following SQL statement runs in an SSMS window.
However, when it's copied to a SQL Agent job and executed from there, the file is neither loaded nor moved, yet the agent log shows "successful".
The same thing also happens with an "SSIS job" step instead of a T-SQL step, even with a proxy for a Windows account (the same account as the SSMS login).
Declare @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'SG_Excel.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'ETL', @project_name=N'Report', @use32bitruntime=True, @reference_id=Null
Select @execution_id
DECLARE @var0 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO
P.S. At first a relative path to the network drive was used, then I switched to the absolute path (\\server\folder). That did not solve the issue.
SSIS package jobs run under the context of the SQL Server Agent. What account is set up to run the SQL Server Agent on the SQL Server? It may need to run as a domain account that has access to the network share.
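You can check the current service accounts from T-SQL (this DMV requires the VIEW SERVER STATE permission):

-- Lists the service accounts for the SQL Server and SQL Server Agent services.
SELECT servicename, service_account
FROM sys.dm_server_services;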
Or you can copy the Excel file to a local folder on the SQL Server, so the package can access the file there.
Personally I avoid the File System Task; I have found it unreliable. I would replace it with a Script Task and use .NET methods from the System.IO namespace, e.g. File.Move (see the sketch after the links below). These are far more reliable and have mature error handling.
Here's a starting point for the System.IO namespace:
https://msdn.microsoft.com/en-us/library/ms404278.aspx
Be sure to select the relevant .NET version using the Other Versions link.
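A minimal sketch of that approach, assuming hypothetical package variables User::SourcePath and User::ArchiveFolder (they must be listed in the Script Task's ReadOnlyVariables):

// C# Script Task body -- add "using System.IO;" to the usings at the top.
public void Main()
{
    string source = (string)Dts.Variables["User::SourcePath"].Value;
    string archiveFolder = (string)Dts.Variables["User::ArchiveFolder"].Value;
    try
    {
        string target = Path.Combine(archiveFolder, Path.GetFileName(source));
        if (File.Exists(target))
            File.Delete(target);      // File.Move throws if the target exists
        File.Move(source, target);
        Dts.TaskResult = (int)ScriptResults.Success;
    }
    catch (IOException ex)
    {
        Dts.Events.FireError(0, "ArchiveFile", ex.Message, string.Empty, 0);
        Dts.TaskResult = (int)ScriptResults.Failure;
    }
}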
When I have seen things like this in the past, it's been that my package wasn't accessing the path I thought it was at run time; it was looking somewhere else, finding an empty folder, and exiting with success.
SSIS can have a nasty habit of going back to variable defaults. It may be looking at a different path you used in dev. Maybe hard-code all path values as a test, or put in breakpoints and double-check the run-time values of all variables and parameters.
Other long shots may be:
Name resolution: are you sure the network name is resolving correctly at runtime?
32/64-bit issues: dev tends to run 32-bit, live may be 64-bit, which may interfere with file paths. Maybe force 32-bit at run time?
The issue is that the SQL statements are missing the statement terminator (;):
Declare @execution_id bigint ;
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'SG_Excel.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'ETL', @project_name=N'Report', @use32bitruntime=True, @reference_id=Null ;
Select @execution_id ;
DECLARE @var0 smallint = 1 ;
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0 ;
EXEC [SSISDB].[catalog].[start_execution] @execution_id ;
GO
I have faced a similar issue in Service Broker.
I wish to write to Excel on my PC a "big" matrix of p rows and c columns, e.g.
3,000 rows and 20 columns. But it's not easy, and I'm wondering if I can simplify it by using fixed numbers for the rows and columns (see the sketch below) instead of:
array mat {&periods,&columns};
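For reference, a fixed-size declaration might look like this (a sketch; _TEMPORARY_ keeps the 3,000 x 20 matrix in memory without creating 60,000 dataset variables):

data _null_;
    /* Fixed dimensions in place of the macro variables above. */
    array mat {3000, 20} _temporary_;
    mat[1, 1] = 42;   /* example assignment */
run;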
Right now, I'm on the free version of SAS called "SAS University Edition", which has only community help.
I would like to output it to Excel, but since SAS Studio runs under VMware on the PC, you can't write directly to disk (although there is a myfolders shared folder).
I tried this, but got this error log:
proc export data=WORK.CPAPMONTE1
file= "/folders/myfolders/outfile1.xlsx"
DBMS=xlsx
;
run;
ERROR: XLSX file can not be created -> /folders/myfolders//outfile1.xlsx. Make sure the path name is correct and that you have
write permission.
ERROR: Too many variables for the output file
I figure that the 2nd error is just a consequence of the first, which shows a // instead of a /.
I have defined a special folder for my data in SAS University Edition as:
/folders/myfolders/CPAP1
but I haven't figured out how to point there.
You can write directly to disk; you need to set up a shared folder similar to myfolders and then reference it as
/folders/myshortcuts/myname
The folder name and shortcut must be exactly correct, and everything needs to be lower case, as it's case-sensitive. If you have myfolders set up, all you need to do is right-click on the folder > Properties and you'll get the path to the folder. Use that in your export (see the sketch below). A similar process can be used for the custom shared folder you set up.
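With the path confirmed from the Properties dialog, the export might look like this (a sketch; note the single slash, the documented OUTFILE= option, and REPLACE to allow overwriting):

/* Write the dataset out as XLSX to the shared folder. */
proc export data=WORK.CPAPMONTE1
    outfile="/folders/myfolders/outfile1.xlsx"
    dbms=xlsx
    replace;
run;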
SAS University Edition Help Center/FAQ
https://support.sas.com/software/products/university-edition/faq/main.htm
Your specific question - How do I create a folder shortcut to my existing SAS files?
https://support.sas.com/software/products/university-edition/faq/shared_folder_access_existing.htm
We have several databases, say DB1, DB2, DB3, etc.
They have to have an identical code base, so we use a DB project in Visual Studio 2012 and generate a SQL deployment script based on a comparison between the project and the UAT/Prod DB1. Then this script is applied to DB1-DBn.
For the very first time in the history of this DB project, I had to create a function that contained a hardcoded database name, for example:
inner join DB1.schema.table1 as t1 on
Now the project cannot be built, the comparison cannot be updated, and the script cannot be generated (the Update and Generate Script buttons are disabled) due to a number of errors pertaining to that database reference, as VS seems to believe that DB1 does not exist.
To work around the errors, I tried adding a project-level SQLCMD variable $(DB) with a default value of DB1 and using it as
inner join [$(DB1)].schema.table1 as t1 on
but it did not seem to make any difference.
Edit:
A suggestion was made to add a circular project reference to itself and assign to it the same variable I was trying to add manually; I'm not sure how to accomplish that.
As per this article, the reference should be added to a manually extracted .dacpac file as follows:
Extract a .dacpac file from the target DB with the following command:
"C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\120\sqlpackage.exe" /SourcePassword:p /SourceUser:u /Action:Extract /ssn:192.168.2.1 /sdn:DB1 /tf:DB1.dacpac
Included that as a database reference. It automatically assigned the correct SQLCMD variable name, and the error disappeared.
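With the reference in place, the hardcoded name in the function can be swapped for the variable. A sketch (the actual variable name is whatever the reference dialog generated, and the join condition here is illustrative):

-- The SQLCMD variable created by the .dacpac reference replaces the literal DB1.
inner join [$(DB1)].schema.table1 as t1 on t1.id = t2.id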
From the source-control point of view, even though adding a database reference to a .dacpac file automatically creates a SQLCMD variable, it does not add the file to the project. The .dacpac file still has to be added to the project as an existing item, which is kind of lame. Doing that in Solution Explorer I encountered an error and had to do it through Team Explorer instead, where it worked.