When I try to access a VSAM sequential dataset (which is also open in CICS) from batch, I open the file in EXTEND mode and append some data to it.
Earlier it was working fine. All of a sudden it stopped working, and I am getting file status 93, which means "resource not available".
OPEN EXTEND <filename>
For KSDS datasets I have used EXCI (External CICS Interface) calls to access them from batch even though they were open online.
But I do not know how to do the same for ESDS.
Could someone help me resolve this error?
I am facing a problem where reading a file directly from a shared drive throws an invalid path error. Let me explain the situation:
The data files, in .xlsx and .xlsb form, are copied to SharePoint, which acts as the source.
I used the 'Open in Explorer' function from SharePoint and got the drive address.
I mapped the path obtained from 'Open in Explorer' to a network drive, added as the P: drive.
Now I am using this path to read the file directly with pandas read_excel.
It is throwing an invalid path OSError 22.
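For reference, this is roughly the call involved (a minimal sketch; the path and filename are placeholders, not the real ones):

import pandas as pd

# Placeholder mapped-drive path; the real SharePoint filename differs.
file_path = r"P:\reports\data.xlsx"

# .xlsx files work with the default engine; .xlsb needs engine="pyxlsb".
df = pd.read_excel(file_path)
print(df.head())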
Issues:
Reading a smaller .xlsx file (15 MB) works well.
Trying to read another Excel file, 150 MB in size, gives the invalid path error.
The same happens when reading the .xlsb binary files.
I have already tried forward and back slashes; same error.
I also used open() to read the file and got the same invalid path error.
However, if I download the same file locally, it works without any issue; I can easily read the files with the same code.
Any suggestions?
I am having an issue with an SSIS package. Running it from BIDS, I can export 400K records successfully, but when I run it from a Job the package runs successfully and yet the Excel file is empty.
The user I am running the package as has full access to the C:\Users folders, and I can see it saving the data into the temporary folder but not writing that data into the file, finishing with an empty file.
For example, 230,000 records (works):
Create the Excel file
Load the temporary data
Write data into the file
Close the file

330,000 records (not working):
Create the Excel file
Load the temporary data
Write data into the file   <-- this line is missing from the Process Monitor trace
Close the file
The suggested solution of giving the user executing the package permission to the folder C:\Users\Default doesn't work for me.
Please help!
Sorry for bugging you guys; I found the problem. There was only 1.6 GB of disk space left on the server. I thought the file would take only about 200 MB, but the export generates lots of temporary files, causing a disk-full error. Strangely, the SSIS package ran successfully without giving any warning or error. Thanks for looking into it.
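For what it's worth, since SSIS gave no warning, a pre-flight free-space check could catch this before the run. A minimal sketch using Python's standard library (the 5 GiB threshold is an arbitrary assumption; size it to your workload):

import shutil

# Check free space on the drive that holds the export and its temp files.
total, used, free = shutil.disk_usage("C:\\")
print(f"Free space: {free / 1024**3:.1f} GiB")

# The threshold below is a made-up example, not a recommendation.
if free < 5 * 1024**3:
    raise RuntimeError("Not enough free disk space for export temp files")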
I was simply trying to generate a summary that would show the run_metadata as follows:
# sess, train_writer, x, y and step are assumed to be defined earlier
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()
summary = sess.run([x, y], options=run_options, run_metadata=run_metadata)
train_writer.add_run_metadata(run_metadata, 'step%d' % step)
train_writer.add_summary(summary, step)
I made sure the path to the logs folder exists; this is confirmed by the fact that the summary file is generated, but no metadata is present. To be honest, I am not sure a metadata file is actually generated, but when I open TensorBoard the graph looks fine and the session runs dropdown menu is populated. When I select any of the runs, it shows a progress bar "Parsing metadata.pbtxt" that stops and hangs right halfway through.
This prevents me from gathering any additional info about my graph. Am I missing something? A similar issue happened when I tried to run this tutorial locally (the MNIST summary tutorial). I feel like I am missing something simple. Does anyone have an idea what could cause this issue? Why would my TensorBoard hang when trying to load session run data?
I can't believe I made it work right after posting the question, but here it is. I noticed that this line:
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
was giving me an error so I removed the params and turned it into
run_options = tf.RunOptions()
without realizing that this is what caused the metadata not to be parsed. Once I researched the error message:
Couldn't open CUDA library cupti64_90.dll
I looked into this GitHub thread and moved the file into the bin folder. After that I re-ran my code with the trace_level param, had no errors, and the metadata was successfully parsed.
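For anyone hitting the same thing, here is a quick Windows-only sketch to check whether Python can resolve the CUPTI DLL named in the error:

import ctypes

# Tries to load the DLL via the normal Windows search path;
# raises OSError if it cannot be found, matching the error above.
try:
    ctypes.WinDLL("cupti64_90.dll")
    print("cupti64_90.dll found and loadable")
except OSError as exc:
    print("cupti64_90.dll not found:", exc)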
I am new to MS Access.
The title is the error I get when I try to import an Excel sheet into a new table in Access 2016. Note that the single empty quote is part of the error message.
I've tried reinstalling, playing around with import options, importing from a CSV, CSVs with different encodings, and checking the table in Excel for errors or inconsistencies.
I have searched and searched without luck. Help would be appreciated.
ADDENDUM:
The CSV I've tried to import is:
CashAccountID,AccountDescription,BankName,BankAccountNumber
301,Primary Checking Account,MegaBank,9017765453
302,Money Market Account,Wells Gargle,3831157490
303,Payroll Account,MegaBank,9835320050
I've encountered the same error and, from trial and error, it appears the issue is related to the size of the Excel file you're importing from. I had success splitting the 70 MB Excel file into two 35 MB files before doing the same import into Access.
The error message from MS Access is nonsensical: the problem occurs when we're not using an import/export specification at all (nor are there any saved in the copy of Access I'm running). I think we can put this failure and its erroneous error message down as an MS Access bug.
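If splitting the workbook by hand is tedious, here is a rough sketch of halving the rows with pandas (the filenames are placeholders, and it assumes all the data sits on a single sheet):

import pandas as pd

# Placeholder filenames; split one large sheet into two workbooks.
df = pd.read_excel("big_file.xlsx")
mid = len(df) // 2
df.iloc[:mid].to_excel("part1.xlsx", index=False)
df.iloc[mid:].to_excel("part2.xlsx", index=False)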
I'm following this tutorial http://azure.microsoft.com/en-us/documentation/articles/hdinsight-use-hive/ but have become stuck when changing the source of the query to use a file.
It all works happily when using New-AzureHDInsightHiveJobDefinition -Query $queryString, but when I try New-AzureHDInsightHiveJobDefinition -File "/example.hql", with example.hql stored in the "root" of the blob container, I get ExitCode 40000 and the following in standarderror:
Logging initialized using configuration in file:/C:/apps/dist/hive-0.11.0.1.3.7.1-01293/conf/hive-log4j.properties
FAILED: ParseException line 1:0 character 'ï' not supported here
line 1:1 character '»' not supported here
line 1:2 character '¿' not supported here
Even when I deliberately misspell the .hql filename, the above error is still generated along with the expected file-not-found error, so it's not the content of the .hql that's causing the error.
I have not been able to find hive-log4j.properties in the blob store to check whether it's corrupt. I have torn down the HDInsight cluster and deleted the associated blob store and started again, but ended up with the same result.
Would really appreciate some help!
I am able to induce a similar error by putting a UTF-8 or Unicode encoded .hql file into blob storage and attempting to run it. Try saving your example.hql file as 'ANSI' in Notepad (Open, then Save As; the encoding option is at the bottom of the dialog), then copy it to blob storage and try again.
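Those three characters ('ï', '»', '¿') are the UTF-8 byte order mark (bytes EF BB BF) being read as single-byte text, which is why Hive chokes at line 1, column 0. As an alternative to Notepad, a small sketch that strips the BOM before re-uploading (filename taken from the question):

# Strip a UTF-8 BOM from example.hql, if present.
with open("example.hql", "rb") as f:
    data = f.read()
if data.startswith(b"\xef\xbb\xbf"):  # the bytes that display as 'ï»¿'
    data = data[3:]
with open("example.hql", "wb") as f:
    f.write(data)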
If the file is not found by Start-AzureHDInsightJob, that cmdlet errors out and does not return a new AzureHDInsightJob object. If you had a previous instance of the result saved, the subsequent Wait-AzureHDInsightJob and Get-AzureHDInsightJobOutput would refer to that previous run, giving the illusion of the same error in the not-found case. That error should definitely indicate a problem reading a UTF-8 or Unicode file when one is not expected.