Checking the number of open files while running a SystemVerilog testbench

I am running a test that compares the system's output data against a set of golden data stored in a text file, but when the test runs to around 2000+ frames I start getting an error that says:
warning-Cannot open file
file could not be opened. Too many open files.
I looked for a SystemVerilog system task that can list all currently open files while the test is running, but I can't seem to find one. Is there a way to do this?

I don't think IEEE Std 1800-2017 offers any way to directly query how many files are currently open.
However, you could look at the integer value returned by $fopen to see the value of the last file descriptor that was opened. When I open many files, the value increments by one every time a new file is opened. I tried this on two simulators.
Keep in mind that there are two ways to call $fopen: with a multichannel descriptor (MCD) or with a file descriptor (FD). They return different values.
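As a rough sketch of that idea (the file names, loop bound, and module wrapper below are made up for illustration), you can print the descriptor each $fopen call returns and watch it grow during the run:

// Sketch: print the descriptor value returned by each $fopen call.
// File names and the loop bound are illustrative, not from the original test.
module fd_watch;
  int fd;
  initial begin
    for (int i = 0; i < 5; i++) begin
      fd = $fopen($sformatf("golden_%0d.txt", i), "r");
      if (fd == 0)
        $display("could not open golden_%0d.txt", i);
      else
        // FDs (as opposed to MCDs) have bit 31 set, so mask it off to see
        // the small count that increments with each successful $fopen.
        $display("opened golden_%0d.txt, fd = %0d", i, fd & 32'h7fff_ffff);
      // An $fclose(fd) when you are done with a file keeps this count bounded.
    end
  end
endmodule

If the masked descriptor reported just before the failure is in the thousands, the testbench is likely opening files without ever closing them, rather than genuinely needing that many files at once.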
Perhaps your simulator has a proprietary way to check for the number of open files. Look through your documentation, or contact the vendor for support.

Responding to the root cause issue:
warning-Cannot open file
file could not be opened. Too many open files.
To read and write files without consuming file descriptors, use the system tasks $readmemh, $readmemb, $writememh, and $writememb, accessing the contents through arrays. These do not use $fopen().
See IEEE Std 1800-2017, sections 21.4 and 21.5.
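For example, the golden-data comparison could be restructured along these lines (the array size, data width, and file names are only illustrative):

// Sketch: compare DUT output against golden data using $readmemh,
// with no $fopen involved. Sizes and file names are illustrative.
module golden_check;
  logic [31:0] golden [0:1023];
  logic [31:0] actual [0:1023];
  initial begin
    $readmemh("golden_frame0001.hex", golden);   // golden data from the text file
    $readmemh("actual_frame0001.hex", actual);   // data dumped by the DUT
    foreach (golden[i])
      if (golden[i] !== actual[i])
        $error("mismatch at word %0d: golden=%h actual=%h",
               i, golden[i], actual[i]);
  end
endmodule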
Another alternative: concatenate files so that the total number of files to access is smaller.

Related

(dd command linux) last byte goes to next line

Hi friends I need some help.
We have a tool that converts binary files to text files and then stores them in Hadoop (HDFS).
In production, that ingestion tool uses FTP to download files from the mainframe in binary format (EBCDIC), and we don't have access to download files from the mainframe in the development environment.
In order to test the file conversion, we manually create text files and try to convert them using the dd command (Linux) with these parameters:
dd if=asciifile.txt of=ebcdicfile conv=ebcdic
After pass through our conversion tool, the expected result is:
000000000000000 DATA
000000000000000 DATA
000000000000000 DATA
000000000000000 DATA
However, it's returning the following result:
000000000000000 DAT
A000000000000000 DA
TA000000000000000 D
ATA000000000000000
I have tried the cbs, obs and ibs parameters, setting them to the record length (the number of characters in each line), without success.
Can anyone help me?
A few things to consider:
How exactly is the data transferred via FTP? "In binary format (EBCDIC)" is ambiguous: either the FTP transfer is in binary mode, in which case nothing is changed or converted during the transfer, or it is in text (ASCII) mode, in which case the data is converted from a specific EBCDIC code page to a specific non-EBCDIC code page. You need to know which mode is used and, if text mode, which two code pages are involved.
From the dd man pages it is unclear which EBCDIC and ASCII code pages are used for the conversion. I'm just guessing here: the EBCDIC code page might be CP-037 and the ASCII one CP-437. If these don't match the ones used in the FTP transfer, the resulting test data is incorrect.
I understand you don't have access to production data in the development environment. However, you should still be able to get test data from the development mainframe using FTP from there. If not, how will you do end-to-end testing?
The EBCDIC conversion is eating your line endings:
https://www.ibm.com/docs/en/zos/2.2.0?topic=server-different-end-line-characters-in-text-files
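If the downstream tool expects fixed-length records (as mainframe data usually is), one thing worth trying is letting dd pad each newline-terminated line to a fixed record length with cbs; the newline that is currently shifting every record by one character is then dropped and replaced by space padding. The record length of 20 below matches the sample lines shown above but is otherwise an assumption; use your real record length:

# Pad each newline-terminated line to a fixed 20-byte record and convert to EBCDIC.
# With GNU dd, conv=ebcdic combined with cbs implies conv=block, so the trailing
# newline is removed and the line is space-padded to the cbs length.
# cbs=20 is assumed from the sample data; substitute your actual record length.
dd if=asciifile.txt of=ebcdicfile cbs=20 conv=ebcdic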

How to stream log files content that is constantly changing file names in perl?

I have a series of applications on Linux systems whose logs I need to constantly 'stream' out, or even just 'tail' out, but the challenge is that the filenames are constantly rolling and changing.
They are all date-encoded (the dates being in different formats), and each then has a different incrementing scheme.
Most of them start at one and count up, but one has no extension on its first file and then adds an extension on the files after it, and another increments a number until it hits 99, then increments a letter and resets the number to 01, and so on, because it rolls over so quickly.
I only have OS-level shell scripting, OS command-line utilities, and Perl available to handle this, so that another application can pick up and read these logs.
The new files are always created the moment logging switches to them, and groups of different logs (some I am reading, some I am not) are written to the same directory, so I cannot just pick up anything that hits the directory.
If I simply 'tail -n 1000000 -f |' them today, this works fine for the reader application I am using until the file changes. I cannot set up file lists or ranges within the reader application, but I can pre-process the logs so they appear as a continuous stream to the reader, rather than having the reader invoke commands to read them directly. A simple Perl log reader also works fine for a static filename, but not for dynamic ones. It is critical that I don't re-process any log lines and only capture new lines being written to the logs.
I admit I am not any kind of Perl guru, and the best clue I've found so far is Perl's glob function, but the examples I've found basically reprocess all of the files on each run and then seem to stop.
Example file names I am dealing with across the multiple apps I am trying to handle:
appA_YYMMDD.log
appA_YYMMDD_0001.log
appA_YYMMDD_0002.log
WS01APPB_YYMMDD.log
WS02APPB_YYMMDD.log
WS03AppB_YYMMDD.log
APPCMMDD_A01.log
APPCMMDD_B01.log
YYYYMMDD_001_APPD.log
As noted, the files do not keep the same inode, and simply monitoring the directory for changes is not practical because a lot of other things are written there. On the dev system more than 50 logs are being written to the directory, with thousands of files present, and I am only trying to retrieve 5 of them. I am checking whether multitail can be made available to try that suggestion, but it is not currently installed, and installing any additional RPMs in this environment is generally a multi-month battle.
ls -i
24792 APPA_180901.log
24805 APPA__180902.log
17011 APPA__180903.log
17072 APPA__180904.log
24644 APPA__180905.log
17081 APPA__180906.log
17115 APPA__180907.log
So the root of what I am trying to do is simply get a continuous stream regardless of whether the file name changes, without running the extract command repeatedly and without big breaks in the data feed while some script figures out that the file being logged to has changed. I don't need to parse the contents (my other app does that). Is there an easy way of handling this changing file name?
How about monitoring the log directory for changes with Linux inotify, e.g. Linux::Inotify2? Then you could detect when new log files are created, stop reading from the old log file, and start reading from the new one.
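A bare-bones sketch of that idea (the directory path and the name filter are assumptions, and error handling is omitted) might look like:

# Sketch: use Linux::Inotify2 to get notified when a new log file appears,
# then switch the tailing code over to it.
# The /var/log/myapp path and the appA_ name filter are illustrative only.
use strict;
use warnings;
use Linux::Inotify2;

my $inotify = Linux::Inotify2->new
    or die "unable to create inotify object: $!";

$inotify->watch('/var/log/myapp', IN_CREATE, sub {
    my $e = shift;
    return unless $e->name =~ /^appA_/;    # only the logs we care about
    print "new log file: ", $e->fullname, "\n";
    # here: close the old handle and start reading from $e->fullname
});

1 while $inotify->poll;                    # blocking event loop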
Try tailswitch. I created this script to tail log files that are rotated daily and have YYYY-MM-DD in their names. To use this script, you just say:
% tailswitch '*.log'
The quoting prevents the shell from interpreting the glob pattern. The script re-runs the glob from time to time and switches to a newer file based on its name.
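The core glob-and-switch idea behind a script like that can be sketched in a few lines of Perl (this is only an illustration of the approach, not the tailswitch script itself; the default pattern and the one-second interval are assumptions):

#!/usr/bin/perl
# Sketch: keep tailing whichever file currently sorts last for the glob
# pattern, and jump to a newer one as soon as it appears.
# Not the actual tailswitch script; pattern and sleep interval are illustrative.
use strict;
use warnings;

my $pattern = shift // '*.log';
my ($current, $fh);

while (1) {
    my ($latest) = sort { $b cmp $a } glob $pattern;   # newest file by name
    if (defined $latest && (!defined $current || $latest ne $current)) {
        close $fh if $fh;
        open $fh, '<', $latest or die "Cannot open $latest: $!";
        seek $fh, 0, 2 unless defined $current;   # first file: skip existing lines
        $current = $latest;
    }
    if ($fh) {
        print while <$fh>;    # emit any newly appended lines
        seek $fh, 0, 1;       # clear EOF so later reads see new data
    }
    sleep 1;
}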

File that normally opens our application, but will fall back to Excel

Our application exports snippets of databases in XLSX format. We wrote our own code on top of System.Packaging as it is many (many!) times faster than using the Excel objects.
Right now we save these files with an .xlsx extension, and that works OK. However, it would be much nicer if double-clicking one of them opened our app instead, but fell back to Excel on machines without it.
I know that SpreadsheetML has a feature to do this. If you insert this near the top of the file:
<?mso-application progid="Excel.Sheet"?>
some sort of magic occurs that causes Excel to open the file on Windows machines. While this might work in SpreadsheetML files, it does not appear to work in "real" .xlsx files - I tried adding this line to various parts of the workbook structure, but it remained unrecognized.
So is there a similar mechanism we can use in "true" XLSX files generated by System.Packaging? Or some other Windows mechanism we should use in these situations?

IBM Mainframe copy/paste

Disclaimer: I'm new to using Rumba to access IBM Mainframe.
I have currently set up a library for personal use, and I have some code that I want to store in a member of this library. How can I copy/paste from a .txt file on my desktop into this member? As of right now I can successfully copy/paste only one line at a time from documents outside of Rumba.
There are various ways. The best one will depend upon the size of the file/amount of data to be transferred.
If it's only a few lines, block copy and paste should work, but you might have to play with Rumba's 'paste' edit settings such as how to handle new lines, etc.
Bigger files can be transferred with the TSO file transfer program ind£file (maybe ind$file on your system), which essentially copies a file to the screen, and then Rumba 'scrapes' the screen for data to put into a file (this is for a mainframe-to-PC transfer; for going the other way the operation is reversed). This can be surprisingly quick.
Lastly there's FTP - either from the command line or via a program such as WinSCP.
Edit:
Based on your comment that the files are about 300 lines long, I'd look into using Rumba's file-transfer option using the ind$file utility. Once you have the files on one system, speak to your mainframe tech support team about the best way to get them to the other systems.
If you need help uploading the files, then the tech support team should be your first point of call.
What mainframe editor are you running? TSO/ISPF?
I copy and paste from ".txt" files into ISPF all the time with no problem.
Select the text you want to copy (in the ".txt" file)
Press CTRL-C
Open the mainframe file using ISPF Edit (option 2).
Enter line command "Inn" at the line where you want the copy to start.
(This inserts "nn" empty lines to receive the copied data. Personally, I usually use "nn"=20)
Position your cursor at the first character of the first empty line.
Press CTRL-V

Append text file with custom footer

Good day,
I am a CNC programmer, not a computer programmer. I am using CAM software to make cutting programs for our CNC router. The router is a bit old and can only take files 200-300 KB big. We are doing carvings that require 1-2 MB text files. I am using a program called GSplit ( http://www.gdgsoft.com/gsplit/ ) to divvy up the text file. It generates 10-25+ files with a custom header that our machine can read. All the files are great and it works, but I have to manually add the closing lines/footer to each file. The files that are created and used are normal .txt files but with a specific extension, .ANC.
Is there any way to automate the process of opening each individual file, scrolling to the end, and copy/pasting the same 1-2 lines of code? The files are NAME[number].ANC in a single folder. Would it be possible to just point at a folder and say "add this 'text' to every file in this folder"?
Thanks for your time.
What OS are you using? On Unix you can do this with a simple one-liner on the command line. If you are in the directory with the specific files, simply execute:
for file in *; do echo "APPEND THIS" >> "$file"; done
If you are running Windows you should be able to do the same using Cygwin (you could probably also use PowerShell, but I don't know anything about that).
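If the footer is more than one line, a small variation (footer.txt is a hypothetical file holding the closing lines, and the *.ANC glob is assumed from the question) is to keep the footer in its own file and append it to every .ANC file in the folder:

# Append the contents of footer.txt (the closing lines) to every .ANC file
# in the current folder. footer.txt is a hypothetical file for this example.
for file in *.ANC; do cat footer.txt >> "$file"; done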
I found a program, Notepad++ (apparently I'm the last person to find it...). I used the find/replace-in-files option with a regular expression (not sure exactly what these are, but I'm sure you guys do), "\s+\z", as what to look for. It finds the last whitespace at the end of each file and then adds the code I need. Easy, free, and I don't need to write any computer code. Thanks for the attempt to help me Dirkk! :)
