Scilab/Xcos: handling strings and toolbox functions in Xcos

I want to create a simulation in Xcos (part of Scilab) representing a real Arduino Uno system. That means changing its input values during the simulation based on its output. The problem is that I need to find a way to handle strings as input and output. How is this possible?
One solution that comes to mind is to somehow use the ATOMS Serial Communication Toolbox functions writeserial() and readserial() in my Xcos diagram, but I have no idea whether that is even possible. Any ideas?

I managed to use those functions in my Xcos diagram by putting them into a Scilab function block (scifunc_block_m) and then parsing their results to get the correct output. As for handling strings as input/output in Xcos, it works to convert a string to its ASCII codes with ascii() and operate on those numeric values.
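To make this concrete, here is a minimal sketch of what the body of such a scifunc_block_m block could look like, assuming a serial handle h was opened beforehand at the Scilab console with the toolbox's openserial(); the port number and mode string below are placeholders, and u1/y1 are the block's input and output:

    // Once, at the Scilab console, before starting the simulation
    // (placeholder port number and mode string):
    //   h = openserial(1, "9600,n,8,1");
    //
    // Body of the scifunc_block_m block:
    writeserial(h, string(u1));      // send the numeric input as text
    reply = readserial(h);           // read the board's reply as a string
    if reply <> "" then
        y1 = ascii(part(reply, 1));  // ASCII code of the first character
    else
        y1 = 0;                      // keep the output defined when idle
    end

Note that ascii() works in both directions: a numeric vector of codes is turned back into a string by the same function.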

Related

How to capture keyboard input during runtime in Verilog?

I've been trying to find a way to capture keyboard input during runtime simulation of my Verilog code. Is this even possible?
I have taken a look at resources like asic-world and the Quick Reference for Verilog found on Google, but found nothing regarding a way to take keyboard inputs.
There seems to be a fundamental misunderstanding here about the difference between a hardware description language used to simulate a design and using that same description to implement the design in actual hardware. It's like drawing a picture of a pinwheel, blowing on that picture, and expecting the pinwheel to start turning.
You can certainly build a 3-D model of that pinwheel, simulate the force of the wind on that model and watch it turn, and then send the model to a 3-D printer to get your pinwheel. I suppose you could even put wind sensors in front of your monitor and write a program that converts values from the sensors into values used in the simulation. The point is, the simulator has no knowledge that the value came from someone blowing on the monitor; it just sees a parameter value change.
Unless you are designing the keyboard hardware yourself and simulating that, there really is not much point in taking keyboard input from a computer and using it to stimulate your design in simulation. The operating system has already abstracted away the keyboard hardware and provides you with a string of character codes. The reason you are simulating in the first place is to verify the functionality of your design; if you find a problem, you will want to replay the exact same stimulus until you have fixed it.
Just as in the pinwheel example, I do know it is possible to set up a program that reads keyboard input and provides it as stimulus to a simulation, but that involves inter-process communication (IPC) and specific tool knowledge to set up.

How to divide the input in a parallel CRC?

I am trying to understand how a parallel CRC using lookup tables works. I got the basic Sarwate code running correctly, but I am very confused when it comes to appending or prepending zeros.
I am trying to use this code for parallel CRC generation, but I do not understand which part of the input data to divide off and where to append the zeros.
Please help, I am really stuck here.
You can see how to combine CRCs computed in parallel by looking at the source code of zlib's crc32_combine() routine. You do not need to prepend or append zeros yourself: because a CRC is linear over GF(2), crc32_combine() advances the CRC of the first chunk over the length of the second one (the effect of feeding in that many zero bytes, computed efficiently) and then XORs it with the CRC of the second chunk.

GNU Assembly: split a string of integers into integers

I'm working on a project for school.
The assignment is as follows:
Implement a sorting algorithm of your choosing in assembly (we are using the GNU Assembler). The input is a text file with a series of numbers separated by newlines.
I have chosen to implement insertion sort.
I have already opened and read the file, and I am able to print its contents to the terminal.
My problem now is how to split the individual numbers out of the file contents in order to compare and sort them.
I believe Google is glowing at the moment from my efforts to find an answer (maybe I don't know what I need to type or where to look).
I can get each character from the string, BUT I don't know how to put them back together as integers (the file contains only integers).
If anybody could help with some keywords to search for, it would be much appreciated.
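The keywords to search for are "atoi" and "ASCII to integer conversion". The standard idiom accumulates one decimal digit at a time; here is a minimal sketch in Scilab (this page's main language), since the loop maps one-to-one onto assembly:

    // The classic atoi accumulation idiom: for each digit character,
    // multiply the running value by 10 and add the digit's value.
    function v = str2int(s)
        v = 0;
        for c = ascii(s)                     // the ASCII codes of s
            v = v * 10 + (c - ascii("0"));   // ascii("0") is 48
        end
    endfunction

In GAS terms that is one multiply by ten and one subtract-then-add per character, accumulated in a register until the newline separator is reached.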

Is it possible to get both angular position and edge counting for an NI quadrature encoder from the same DAQ channel in LabVIEW?

I tried to run the code below, but it does not let me select the same DAQ channel for both readings, despite the fact that they should be taken from the same DAQ channel/encoder. Any suggestions would be welcome.
You can use the same input terminal to perform both measurements, but you cannot use the same counter to do so. I cannot see the values of your counter I/O controls, but I suspect they are asking the driver to use the same counter for two different things.
Try using two counters instead, with PFI8 used as the input terminal for both tasks.

Adding some noise to a text

I wonder if there is any known algorithm/strategy to add some noise to a text string (for instance, adding a random sequence of characters every now and then or something similar).
I don't want to completely destroy the text, just make it slightly unusable. Also, I'm not interested in reversing the changes; if needed, I can just recreate the original text from the sources I used to create it in the first place.
Of course, a very basic algorithm for doing this could easily be implemented, but probably somebody has already created a somewhat more sophisticated one. If a Java implementation of something like this is available, even better.
If you are using .NET and you need some random bytes, try the GetBytes method of RNGCryptoServiceProvider. Nice and random. You could also use it to help in selecting random positions to update.
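For the general strategy, here is a minimal sketch in Scilab (this page's main language); the same idea ports directly to Java using SecureRandom, or to .NET as suggested above. It splices random printable characters into the text at random positions by working on the ASCII codes; add_noise is a hypothetical helper name:

    // Hypothetical helper: insert n random printable characters at
    // random positions by splicing the vector of ASCII codes.
    function noisy = add_noise(txt, n)
        codes = ascii(txt);                                 // string -> codes
        for k = 1:n
            pos = grand(1, 1, "uin", 1, size(codes, "*"));  // insertion point
            ch  = grand(1, 1, "uin", 32, 126);              // printable ASCII code
            codes = [codes(1:pos), ch, codes(pos+1:$)];     // splice it in
        end
        noisy = ascii(codes);                               // codes -> string
    endfunction

For example, add_noise("The quick brown fox", 3) keeps the text readable but scatters three junk characters through it.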
