I am trying to understand how a parallel CRC with lookup tables works. I got the basic Sarwate code running correctly, but I am very confused about appending or prepending zeros.
I am trying to use this code for parallel CRC generation, but I am confused about how to divide the input data and where to append zeros.
Please help, I am really stuck here.
You can see how to combine CRCs computed in parallel by looking at the source code for zlib's crc32_combine() routine. You do not need to prepend or append zeros.
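A minimal sketch of the idea in C++, assuming zlib is available to link against; the data and split point here are arbitrary:

```cpp
#include <zlib.h>
#include <cstddef>
#include <cstdio>

int main() {
    const unsigned char data[] = "hello, parallel CRC world";
    const size_t len = sizeof(data) - 1;  // exclude the trailing NUL
    const size_t mid = len / 2;           // arbitrary split point

    // Reference: CRC of the whole buffer in one pass.
    uLong whole = crc32(crc32(0L, Z_NULL, 0), data, (uInt)len);

    // CRCs of the two halves; these could run on separate threads.
    uLong crc1 = crc32(crc32(0L, Z_NULL, 0), data, (uInt)mid);
    uLong crc2 = crc32(crc32(0L, Z_NULL, 0), data + mid, (uInt)(len - mid));

    // Merge: only the second block's CRC and *length* are needed.
    uLong combined = crc32_combine(crc1, crc2, (z_off_t)(len - mid));

    std::printf("whole:    %08lx\ncombined: %08lx\n", whole, combined); // identical
    return 0;
}
```

The key point is that crc32_combine() needs only the second block's CRC and its length, never its contents, which is why no zero padding is involved.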
I want to create a simulation in Xcos (part of Scilab) representing a real Arduino Uno system. That means changing its input values during the simulation based on its output. The problem is that I need to find a way to handle strings as input and output. How is this possible?
The solution that comes to my mind is to somehow use the Atoms Serial Communication Toolbox functions like writeserial() and readserial() in my Xcos scheme. But I have no idea whether this is even possible. Any ideas?
I managed to use those functions in my Xcos scheme by putting them into a Scilab function block (scifunc_block_m) and then parsing their output to get the correct result. And for handling strings as Xcos input/output, it is possible to convert a string to its character codes with ascii() and work with those.
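To make the round trip concrete: Xcos signals are numeric, so a string has to travel as a vector of character codes. Here is the idea sketched in C++ for clarity; in Scilab, the ascii() function performs both conversions in one call.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Encode a string as a vector of character codes (what ascii(s) does in Scilab).
std::vector<int> encode(const std::string& s) {
    return std::vector<int>(s.begin(), s.end());
}

// Decode the codes back into a string (what ascii(v) does for a numeric vector).
std::string decode(const std::vector<int>& codes) {
    return std::string(codes.begin(), codes.end());
}

int main() {
    std::vector<int> signal = encode("LED ON");  // numeric signal for the Xcos scheme
    std::cout << decode(signal) << "\n";         // back to text on the other end
    return 0;
}
```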
I'm working on a project for school.
The assignment is as follows:
Implement a sorting algorithm of your choosing in assembly (we are using the GNU Assembler). The input is a text file with a series of numbers separated by newlines.
I'm trying to implement insertion sort.
I have already opened and read the file, and I'm able to print its contents to the terminal.
My problem now is how to split out each number from the file in order to compare and sort them.
I believe Google is glowing at the moment from my efforts to find an answer (maybe I don't know what to search for or where to look).
I have tried to get each character from the string, which I'm able to do, BUT I don't know how to put them back together as integers (we only have integers).
If anybody could help with some keywords to search for, it would be much appreciated.
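In C++ terms, I believe what I'm after is something like the digit-accumulation loop below; I just don't know how to express it (or what it's called) in GAS assembly:

```cpp
#include <iostream>
#include <string>

// Convert a string of decimal digits to an integer:
// result = result * 10 + (c - '0') for each character.
int parse_number(const std::string& s) {
    int result = 0;
    for (char c : s) {
        if (c < '0' || c > '9') break;   // stop at newline or any non-digit
        result = result * 10 + (c - '0');
    }
    return result;
}

int main() {
    std::cout << parse_number("123") + parse_number("45") << "\n";  // prints 168
    return 0;
}
```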
I'm sure the average vtk user has already seen results like the following more than once.
My question(s): How would you repair such a broken surface? And what is typically the cause of such holes in the surface?
My particular example was created using vtkBooleanOperationPolyDataFilter and vtkAppendPolyData, but I've seen such broken, degenerate surfaces on other occasions as well.
Many thanks for any suggestions.
This is most likely data-related. Suggestions:
Many vtk filters have assumptions about the inputs, and I am guessing your inputs violated some of these assumptions. E.g. vtkBooleanOperationPolyDataFilter expects inputs to be manifolds, otherwise "unexpected results may be obtained". What are you feeding into the boolean filter? Are these inputs manifolds?
Some other filters have much stricter requirements and expect only triangulated surfaces; in the image you posted I think I see quads. Try running the inputs through vtkTriangleFilter at the beginning of your processing pipeline to split all polys into triangles (see the sketch after this list).
Inspect the second output of vtkBooleanOperationPolyDataFilter, which contains the intersection as a set of polylines, for any hints about what could be the cause of this.
Try to save the intermediate results to a file and inspect them at different stages of your processing pipeline.
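To illustrate the triangulation and second-output suggestions together, here is a minimal C++ sketch; the two overlapping spheres are only stand-ins for whatever surfaces you actually feed in, and union is just an example operation.

```cpp
#include <vtkSmartPointer.h>
#include <vtkSphereSource.h>
#include <vtkTriangleFilter.h>
#include <vtkBooleanOperationPolyDataFilter.h>
#include <vtkPolyData.h>
#include <iostream>

int main() {
    // Two overlapping spheres as stand-ins for your actual input surfaces.
    auto sphere1 = vtkSmartPointer<vtkSphereSource>::New();
    auto sphere2 = vtkSmartPointer<vtkSphereSource>::New();
    sphere2->SetCenter(0.4, 0.0, 0.0);

    // Split all polys into triangles before the boolean operation.
    auto tri1 = vtkSmartPointer<vtkTriangleFilter>::New();
    tri1->SetInputConnection(sphere1->GetOutputPort());
    auto tri2 = vtkSmartPointer<vtkTriangleFilter>::New();
    tri2->SetInputConnection(sphere2->GetOutputPort());

    auto boolean = vtkSmartPointer<vtkBooleanOperationPolyDataFilter>::New();
    boolean->SetOperationToUnion();  // or Intersection/Difference
    boolean->SetInputConnection(0, tri1->GetOutputPort());
    boolean->SetInputConnection(1, tri2->GetOutputPort());
    boolean->Update();

    // First output: the result surface. Second output: the intersection
    // polylines -- inspect these when debugging broken results.
    std::cout << "result cells: "
              << boolean->GetOutput(0)->GetNumberOfCells() << "\n"
              << "intersection cells: "
              << boolean->GetOutput(1)->GetNumberOfCells() << "\n";
    return 0;
}
```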
If none of this leads you to the cause of the problem, please post the inputs, the code, the vtk version, and the system you are running it on, so that we can reproduce your results.
HTH,
Miro
In the case I presented above, the broken surface was caused by problems with vtkBooleanOperationPolyDataFilter. According to this thread, the algorithm has been improved and is (or will soon be) available in a newer release of vtk.
I also have to accept that there is no general recipe for recovering from such failures in vtk, which, as mirni pointed out, are data-related.
So, as many others have asked in the past: is there a way to beat the 32k character limit per cell in Excel?
I have found ways to do it by splitting the workload into two different .txt files and then merging them, but it is a giant PITA, and more often than not I end up just using Excel within its limits because I no longer have time to validate the data after the .txt merges; it is a long and tedious process IMO.
However, I think the limitation is there because it was hard-coded when Microsoft developed Excel, and they have yet to raise it (in the 2013 version the limit is still the same, so it would do no good to upgrade).
I also know many will say that if you need information of that length in a single cell, you should use Access. Well, I have no idea how to use Access, or how to import a tab-delimited file into Access the way you would into Excel; and even if I figured that out, I would still have to learn all the new commands and their Excel equivalents, if such things even exist.
So I was browsing some blog posts the other day about beating software limitations, and I read something about reverse engineering.
Would it be possible to load Excel into a hex editor and change every instance of 32767 to something greater?
While 32767 may seem like an arbitrary number, it's actually the upper limit of a 16-bit signed integer (called a short in C). The range of a short goes from -32768 to 32767.
A 16-bit integer can also be unsigned, in which case its range is 0 to 65535.
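You can check these bounds with a couple of lines of C++ (assuming a platform where short is exactly 16 bits, which is the common case):

```cpp
#include <iostream>
#include <limits>

int main() {
    // 16-bit signed: -32768 .. 32767 (Excel's cell limit matches the maximum).
    std::cout << std::numeric_limits<short>::min() << " .. "
              << std::numeric_limits<short>::max() << "\n";
    // 16-bit unsigned: 0 .. 65535.
    std::cout << std::numeric_limits<unsigned short>::min() << " .. "
              << std::numeric_limits<unsigned short>::max() << "\n";
    return 0;
}
```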
Since it's impossible for a cell to have a negative number of characters, it seems odd that Microsoft would limit a cell's length based on a signed rather than unsigned 16-bit integer. When they wrote the original program, they probably couldn't imagine anyone storing so much information in a single cell. Using shorts may have simplified the code. (My first computer had only 4K of memory, so it's still amazing to me that Excel can store 8 times that much information in a single cell.)
Microsoft may have kept the 32767 limit to maintain backward compatibility with previous versions of Excel. However, that doesn't really make sense, because the row and column counts greatly increased in recent versions of Excel, making large spreadsheets incompatible with previous versions.
Now to your question of reverse-engineering Excel. It would be a gargantuan task, but not impossible. In the early '90s, I reverse-engineered and wrote vaccines for a few small computer viruses (several hundred bytes). In the '80s, I reverse-engineered an 8KB computer chess program.
When reverse-engineering an executable, you'll need a good disassembler or decompiler. Depending on what you use, you may get assembly-language or C code as the output. But note that this will not be commented code, and you will not see meaningful variable or function names. You'll have to read every line of code to determine what it does. And you'll quickly discover that the executable is the least of your worries. Excel's executable links in a number of DLL files, which would also need reverse-engineering.
To be successful, you will need an extensive knowledge of Windows programming in addition to C or Intel assembly code – not to mention a large amount of patience. Learning Access would be a much simpler task.
I'd be interested to know why 32767 is insufficient for your needs. A database may make more sense, and it wouldn't necessarily need to duplicate the functionality of Excel. I store information in a database for output to Web pages, in which case I use HTML+JavaScript for anything that needs to be interactive.
In case anyone is still having this issue:
I had the same problem when generating a pipe-separated file of longitudinal research data: the header row exceeded the 32767-character limit. It is not an issue unless the end user opens the file in Excel. The workaround is to have the end user open the file in Google Sheets, perform the text-to-columns transformation, then download the file and open it in Excel.
https://support.clarivate.com/ScientificandAcademicResearch/s/article/Web-of-Science-Length-limit-of-cell-contents-in-Excel-when-opening-exported-bibliographic-data?language=en_US
Jack Straw from Wichita (https://stackoverflow.com/users/10327211/jack-straw-from-wichita), surely you can import a pipe-separated file directly into Excel using Data > Get Data? For me it finds the pipe and treats the piped file the same way as a CSV. Even if it did not for you, the import gives you the option to specify the separator used in your text file.
Kind regards
Sefton Hall
I wonder if there is any known algorithm/strategy for adding noise to a text string (for instance, inserting a random sequence of characters every now and then, or something similar).
I don't want to completely destroy the text, just make it slightly unusable. Also, I'm not interested in reversing the changes; if needed, I can just recreate the original text from the sources I used to create it in the first place.
Of course, a very basic algorithm for doing this could easily be implemented, but probably somebody has already created a somewhat more sophisticated one. If a Java implementation of something like this is available, even better.
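For concreteness, the kind of very basic algorithm I mean is sketched below (in C++ for illustration, though I'd ultimately want Java): walk the text and, with a small probability at each position, insert a short burst of random characters. The probability and the alphabet are arbitrary choices.

```cpp
#include <iostream>
#include <random>
#include <string>

// Insert short bursts of random characters with probability p per position.
std::string addNoise(const std::string& text, double p, unsigned seed) {
    std::mt19937 rng(seed);
    std::bernoulli_distribution noisy(p);
    std::uniform_int_distribution<int> letter('a', 'z');
    std::uniform_int_distribution<int> burst(1, 3);  // 1-3 junk characters per burst

    std::string out;
    for (char c : text) {
        if (noisy(rng)) {
            for (int i = burst(rng); i > 0; --i)
                out += static_cast<char>(letter(rng));
        }
        out += c;
    }
    return out;
}

int main() {
    std::cout << addNoise("the quick brown fox", 0.1, 42) << "\n";
    return 0;
}
```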
If you are using .NET and need some random bytes, try the GetBytes method of RNGCryptoServiceProvider. Nice and random. You could also use it to help select random positions to update.