I want to load the ThermoCycle library in the OpenModelica Connection Editor (OMEdit), but I get the message "The file was not encoded in UTF-8".
To fix this problem I am told to "add a file package.encoding at the top-level", but I don't understand what I must do. What is this file called "package.encoding", what should it contain, and where should I put it?
The error message says it all: "add a file package.encoding at the top-level."
Put the file where your library's package.mo is located.
The file must contain the name of the encoding used by the library.
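For example, if the library sources are stored in ISO-8859-1 (just an assumption here; check what encoding your copy of ThermoCycle actually uses), the package.encoding file next to package.mo would contain the single line:

```
ISO-8859-1
```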
Note that you can also use OMEdit's encoding conversion feature: File->Open/Convert Modelica File(s) With Encoding.
I'm transferring a file with the content shown below from a mainframe system to a Unix instance. The file uses ^&* as a delimiter. The file is sent correctly from the mainframe, but when we receive it on the Unix side the delimiter arrives as Ø&*.
I'm using Connect:Direct to transfer the file from one system to the other.
File Type: Flat File, File transfer: CD (Connect Direct)
File content:
H^&*20220407^&*160009^&*2006
T^&*1
But when I receive the file on the Unix server, I can see the file content has changed: mainly, ^ is converted to Ø.
HØ&*20220407Ø&*160009Ø&*2006
TØ&*1
This is most surely a code page problem.
The data in the file on the mainframe is (most probably) in some EBCDIC code page. Connect:Direct performs a code page transformation when sending the file to that UNIX system; this is what XLATE(YES) means.
However, there is a default "from"-"to" code page pair configured, which is used with XLATE(YES), and this is probably not the correct pair. You need to:
find out which EBCDIC code page the data on the mainframe is encoded in. Is it IBM-037, IBM-1047, IBM-500, IBM-273, etc.? There are many.
find out which code page the data shall be in on the UNIX side: UTF-8, ISO8859-1, 437, etc. There are many.
make sure Connect:Direct will transform using the correct source and target code pages.
Ask your Connect:Direct support people to help you with this.
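To illustrate what such a code page transformation does, here is a minimal Java sketch that re-encodes EBCDIC bytes as UTF-8. IBM-1047 is only an assumed source code page, and this is purely an illustration of the conversion, not a substitute for fixing the Connect:Direct configuration:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class EbcdicToUtf8 {
    public static void main(String[] args) throws Exception {
        // Assumption: the mainframe file is encoded in IBM-1047 EBCDIC.
        // Cp1047 ships with most JDKs, but availability can vary.
        Charset ebcdic = Charset.forName("Cp1047");

        byte[] raw = Files.readAllBytes(Paths.get(args[0]));

        // Decode with the correct EBCDIC code page, then re-encode as UTF-8.
        String text = new String(raw, ebcdic);
        Files.write(Paths.get(args[1]), text.getBytes(StandardCharsets.UTF_8));
    }
}
```

If the source code page is wrong, characters like ^ come out as the wrong glyph on the UNIX side, which is exactly the symptom described above.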
How can I check whether a file uploaded to my server is really an image, and not just an arbitrary file given a jpg, png, or gif extension to make it look like an image? I created an image compression service using imagemin, but I'm worried that an uploaded file might not really be an image.
I have used the mmmagic module for this; it detects MIME types:
mmmagic on Github
MIME types alone are not useful here.
Try magic numbers, or simply try to open the file; a sketch of the magic-number approach follows the links below.
Read these links for more details:
https://stackoverflow.com/a/8475542/1979882
http://www.astro.keele.ac.uk/oldusers/rno/Computing/File_magic.html#Image
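The question is about a Node.js service, but as a language-neutral sketch of the magic-number idea, here is a small Java example that checks a file's first bytes against the standard PNG, JPEG, and GIF signatures:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class ImageSniffer {
    // Well-known magic numbers for the three formats in question.
    private static final byte[] PNG  = {(byte) 0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A};
    private static final byte[] JPEG = {(byte) 0xFF, (byte) 0xD8, (byte) 0xFF};
    private static final byte[] GIF  = {'G', 'I', 'F', '8'};   // covers GIF87a and GIF89a

    public static boolean looksLikeImage(Path file) throws IOException {
        byte[] header = new byte[8];
        try (InputStream in = Files.newInputStream(file)) {
            if (in.read(header) < 3) return false;   // too short for any of these formats
        }
        return startsWith(header, PNG) || startsWith(header, JPEG) || startsWith(header, GIF);
    }

    private static boolean startsWith(byte[] data, byte[] magic) {
        for (int i = 0; i < magic.length; i++) {
            if (data[i] != magic[i]) return false;
        }
        return true;
    }
}
```

A signature check like this only proves the header looks right; actually decoding the file (e.g. with your image library) is the stronger test.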
Another option is https://github.com/sindresorhus/image-type. It looks like mmmagic relies on libmagic, which is a C library and might be a lot to pull in...
I have a problem with MapDB version 1.0.6. When I create a database I end up with two files with the same name but different file types.
One is, for example, IRTree with no extension, and the other is IRTree with the extension .p.
Whenever I try to read my database by providing the filename IRTree, I end up with an exception:
a NullPointerException from the call DBMaker.newFileDB(new File(filename)).readOnly().make();, or an IOException: storage header is invalid.
Can anyone explain to me what's going on?
MapDB uses two files; the .p file is used to store the data. Always open the file without the extension, otherwise MapDB will try to open the wrong file.
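A minimal sketch against the MapDB 1.0.x API, assuming the database was created as IRTree; the collection name "myMap" is a placeholder for whatever name the collection was created with:

```java
import java.io.File;
import java.util.concurrent.ConcurrentNavigableMap;
import org.mapdb.DB;
import org.mapdb.DBMaker;

public class OpenIrTree {
    public static void main(String[] args) {
        // Pass "IRTree", not "IRTree.p": MapDB locates the .p data file itself.
        DB db = DBMaker.newFileDB(new File("IRTree"))
                       .readOnly()
                       .make();

        // "myMap" is a placeholder; use the name the collection was created with.
        ConcurrentNavigableMap<Object, Object> map = db.getTreeMap("myMap");
        System.out.println("entries: " + map.size());

        db.close();
    }
}
```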
I am curling a website and writing the response to a .json file. This file is the input to my Java code, which parses it using a JSON library; the necessary data is then written to a CSV file which I later load into a database.
Since data coming from a website can arrive in different encodings, I make sure that I read and write in UTF-8, but I still get wrong output.
For example, Østerriksk becomes �sterriksk.
I am doing all of this on Linux. I think there is an encoding problem, because the same code runs fine on Windows but not on Unix/Linux.
I am quite sure my Java code is correct, but I am not able to find out what I'm doing wrong.
You're reading the data as ISO 8859-1, but the file is actually UTF-8. There is an argument (or setting) on the file reader that lets you specify the charset explicitly; use that instead of relying on the platform default, and the problem should go away.
Also: curl isn't going to care about the encodings. It's really something in your Java code that's wrong.
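A minimal sketch of what that looks like in Java, with explicit charsets on both the reader and the writer (the file names are placeholders):

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class Utf8RoundTrip {
    public static void main(String[] args) throws Exception {
        // Passing the charset explicitly avoids the platform default,
        // which differs between Windows and Linux -- hence the discrepancy.
        try (BufferedReader in = Files.newBufferedReader(
                     Paths.get("input.json"), StandardCharsets.UTF_8);
             BufferedWriter out = Files.newBufferedWriter(
                     Paths.get("output.csv"), StandardCharsets.UTF_8)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.write(line);
                out.newLine();
            }
        }
    }
}
```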
What IDE are you using? This can happen, for example, if you are using the Eclipse IDE and have not set your default encoding to UTF-8 in the properties.
I want to identify the file format of the input file given to my shell script, i.e. whether it is a .pst or a .dbx file. I checked "How to check the extension of a filename in a bash script?". That question deals with txt files, and two methods are given there -
check if the extension is txt
check if the mime type is application/text etc.
I tried file -ib <filename> on a .pst and a .dbx file and it showed application/octet-stream for both. However, if I just do file <filename>, then I get
this for the dbx file -
file1.dbx: Microsoft Outlook Express DBX File Message database
and this for the pst file -
file2.pst: Microsoft Outlook binary email folder (Outlook >=2003)
So, my questions are -
Is it better to use MIME type detection every time, when the output can be anything and we need a proper check?
How can a MIME type check be applied in this case, when both files return "application/octet-stream"?
Update
I didn't want to do extension-based detection because it seems we just can't be sure, on a Unix system, that a .dbx file truly is a dbx file. file <filename> returns a line which contains the correct description of the file (e.g. "Microsoft Outlook Express DBX File Message database"), which means the file command is able to identify the file type properly. Then why does it not produce the correct information with file -ib <filename>?
Will parsing the string output of file <filename> be fine? Is it advisable, assuming I only need to identify a narrow set of data storage files from the Outlook family (MS Outlook Express, MS Office Outlook 2003, 2007, 2010, etc.)? A small text identifier like application/dbx which could be compared against would be all I need.
The file command relies on having a file type detection database which includes rules for the file types that you expect to encounter. It may not be possible to recognize these file types if the file content doesn't have a unique code near the beginning of the file.
Note that the -i option to emit MIME types actually uses a separate "magic" numbers file to recognize file types, rather than translating the long descriptions into MIME types. It is quite possible for these two databases to be out of sync. If your application really needs to recognize these two file types, I suggest you look at the Linux source code of file to see how it recognizes them, and then code this recognition algorithm right into your app, as in the sketch below.
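Along those lines, here is a minimal Java sketch of such a built-in check. The signatures ("!BDN" for PST files, CF AD 12 FE for Outlook Express DBX files) are the ones commonly listed in file's magic database; verify them against your own sample files. The returned identifiers are invented, along the lines the question suggested:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;

public class OutlookFileSniffer {
    // Signatures as commonly listed in file's magic database -- verify locally.
    private static final byte[] PST_MAGIC = {'!', 'B', 'D', 'N'};
    private static final byte[] DBX_MAGIC = {(byte) 0xCF, (byte) 0xAD, (byte) 0x12, (byte) 0xFE};

    public static String identify(String path) throws IOException {
        byte[] header = new byte[4];
        try (InputStream in = Files.newInputStream(Paths.get(path))) {
            if (in.read(header) < 4) return "unknown";
        }
        if (Arrays.equals(header, PST_MAGIC)) return "application/pst";   // invented identifier
        if (Arrays.equals(header, DBX_MAGIC)) return "application/dbx";   // invented identifier
        return "unknown";
    }

    public static void main(String[] args) throws IOException {
        System.out.println(identify(args[0]));
    }
}
```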
If you want to do the equivalent of DOS file type detection, then strip the extension off the filename (everything after the last period) and look that string up in your own table, where you define the types that you need. For example:
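A sketch of that table-lookup approach (the identifiers are again invented placeholders):

```java
import java.util.Map;

public class ExtensionLookup {
    // Example table; extend it with the types your script needs to handle.
    private static final Map<String, String> TYPES = Map.of(
            "pst", "application/pst",
            "dbx", "application/dbx");

    public static String byExtension(String filename) {
        int dot = filename.lastIndexOf('.');
        if (dot < 0 || dot == filename.length() - 1) return "unknown";
        String ext = filename.substring(dot + 1).toLowerCase();
        return TYPES.getOrDefault(ext, "unknown");
    }
}
```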