I have an idea that I have been working on, but there are some technical details that I would love to understand before I proceed.
From what I understand, Linux communicates with the underlying hardware through device files under /dev. I was messing around with my webcam input to Zoom, and I found someone explaining that I need to create a virtual device and wire it to the output of another program (the v4l2loopback module).
My questions are
1- How does Zoom detect the webcams available for input? My /dev directory has two "files" called video (/dev/video0 and /dev/video1), yet Zoom only detects one webcam. Is the webcam communication done through these video files or not? If yes, why does simply creating one not affect Zoom's input choices? If not, how does Zoom detect the input and read the webcam feed?
2- Can I create a virtual device and write a kernel module for it that feeds the input from a local file? I have written a lot of kernel modules, and I know they have read, write, and release methods. I want to parse the video whenever a read request from Zoom is issued. How should the video be encoded? Is it MP4, a raw format, or something else? How fast should I be sending input (in terms of kilobytes)? I think it is a function of my webcam recording specs: if it is 1920x1080, each pixel is 3 bytes (RGB), and it is recording at 20 fps, I can simply calculate how many bytes are generated per second (roughly 124 MB/s), but how does Zoom expect the input to be fed into it? Assuming the stream is sent in real time, Zoom should be reading input every few milliseconds. How do I get access to such information?
Thank you in advance. This is a learning experiment; I am just trying to do something fun that I am motivated to do while learning more about Linux-hardware communication. I am still a beginner, so please go easy on me.
Apparently, there are two types of /dev/video* files: one carries metadata and the other carries the actual stream from the webcam. Creating a virtual device of the same type as the stream node in the /dev directory did result in Zoom recognizing it as an independent webcam, even without creating its metadata file. I finally achieved what I wanted, but I used the OBS Studio virtual camera feature that was added after update 26.0.1, and it is working perfectly so far.
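For anyone curious how a program can tell the two node types apart: the sketch below uses plain V4L2 ioctls, nothing Zoom-specific (how Zoom itself enumerates cameras isn't documented, but listing only nodes that report capture capability matches the behaviour above). It queries each node's capabilities and reports whether it is a capture device.

```c
/* Minimal sketch: ask each /dev/video* node what it actually is.  Nodes that
 * report V4L2_CAP_VIDEO_CAPTURE are the ones video applications list as
 * webcams; metadata nodes report V4L2_CAP_META_CAPTURE instead.
 * Error handling is kept minimal on purpose. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    const char *nodes[] = { "/dev/video0", "/dev/video1" };
    for (unsigned i = 0; i < 2; i++) {
        int fd = open(nodes[i], O_RDWR);
        if (fd < 0) { perror(nodes[i]); continue; }

        struct v4l2_capability cap;
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0) {
            printf("%s: card='%s' caps=0x%08x video-capture=%s\n",
                   nodes[i], (const char *)cap.card, cap.device_caps,
                   (cap.device_caps & V4L2_CAP_VIDEO_CAPTURE) ? "yes" : "no");
        }
        close(fd);
    }
    return 0;
}
```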
RFID readers with multiple antennas switch between them: the reader runs one antenna while the others sleep, cycling through them one by one. The switching is fast enough that running one antenna at a time doesn't matter. According to my observations, each switch takes 1 second.
(After some time I realised this 1 second applies only to the Motorola FX7500. Most other readers do it the right way, lightning fast, in milliseconds.)
That is what I know so far.
Now, in my specific application I need this procedure to run faster, say 200 ms instead of 1 s.
Is this value changeable? If so, which message and parameter in LLRP can modify this value?
Actually, the 1-second problem is specific to the Motorola FX7500 reader. By examining the LLRP messages that Motorola's own library exchanges with the FX7500, I discovered there are vendor-specific parameters that can be set via the custom extension fields of LLRP. These parameters and settings can be found in Motorola's reader software guide. The antenna switch time is one of these vendor-specific parameters; it is not a parameter of generic LLRP. A piece of code generating an LLRP message that includes the custom extension in the proper format solved my issue.
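For reference, this is roughly what packing such an extension looks like at the byte level. It is only a sketch of the generic LLRP custom parameter framing (TLV type 1023, then vendor identifier, subtype, and payload, per the LLRP spec); the actual vendor ID, subtype, and payload values for the FX7500 antenna-switch setting have to come from Motorola's software guide, so the arguments here are placeholders.

```c
#include <stdint.h>
#include <string.h>

/* Sketch only: packs a generic LLRP TLV "Custom Parameter" (type 1023).
 * vendor_id, param_subtype and payload are placeholders -- the real values
 * for the FX7500 setting come from Motorola's reader software guide. */
static size_t pack_llrp_custom_param(uint8_t *buf,
                                     uint32_t vendor_id,     /* vendor's IANA PEN */
                                     uint32_t param_subtype, /* vendor-defined subtype */
                                     const uint8_t *payload, size_t payload_len)
{
    uint16_t type = 1023;                                    /* Custom Parameter */
    uint16_t len  = (uint16_t)(4 + 4 + 4 + payload_len);     /* whole TLV length */

    buf[0] = (type >> 8) & 0x03;          /* 6 reserved bits + top of 10-bit type */
    buf[1] = type & 0xFF;
    buf[2] = len >> 8;            buf[3] = len & 0xFF;
    buf[4] = vendor_id >> 24;     buf[5] = vendor_id >> 16;
    buf[6] = vendor_id >> 8;      buf[7] = vendor_id & 0xFF;
    buf[8] = param_subtype >> 24; buf[9] = param_subtype >> 16;
    buf[10] = param_subtype >> 8; buf[11] = param_subtype & 0xFF;
    memcpy(buf + 12, payload, payload_len);
    return len;
}
```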
I am trying to write an application (I'm a GUI first-timer) for my son, who has autism. There is a video player in the top half and a text entry area in the bottom. When letters are typed, sounds are produced to mimic the words in the video.
There have been other posts on this site about playing sounds on key presses, using GStreamer as a system call. I have also tried libcanberra, but both seem to have significant delays between sounds. I can write the app in Python or C, but will likely do at least some of it in C.
I also want to mention that the video portion is being played by GStreamer. I tried to create two instances of GStreamer, to avoid expensive system calls, but the audio instance seemed to kill the app when called.
If anyone has any tips on creating faster-responding sounds, I would really appreciate it.
You can upload a raw audio sample directly to PulseAudio, so there will be no decoding and (perhaps) fewer extra context switches, by using the following function from libcanberra:
http://developer.gnome.org/libcanberra/unstable/libcanberra-canberra.html#ca-context-cache
The next ca_context_play() will use it.
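A minimal sketch of what that looks like in C (the event id and file path are placeholders, and error handling is trimmed):

```c
/* Cache a key-click sample in the sound server once, then replay it cheaply.
 * Uses the public libcanberra calls: ca_context_create, ca_context_cache,
 * ca_context_play. */
#include <canberra.h>
#include <stddef.h>

static ca_context *ctx;

int sounds_init(void)
{
    int r = ca_context_create(&ctx);
    if (r < 0) return r;

    /* Upload the decoded sample up front; later plays reuse it. */
    return ca_context_cache(ctx,
                            CA_PROP_EVENT_ID, "key-click",
                            CA_PROP_MEDIA_FILENAME, "/usr/share/sounds/key-click.wav",
                            NULL);
}

void sounds_play_click(void)
{
    /* No decoding happens in this path; the cached sample is triggered. */
    ca_context_play(ctx, 0, CA_PROP_EVENT_ID, "key-click", NULL);
}
```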
However, the biggest problem you'll encounter with this scenario (with simultaneous video playback) is that the audio device might be configured with a large latency by PulseAudio (up to half a second or more for normal playback). It may be reasonable to file a bug against libcanberra to support a LOW_LATENCY flag, as it currently doesn't attempt to minimize delay for sound events, as far as I know. That would be great to have.
GStreamer's pulsesink could probably achieve low latency too (it has some properties for that), but I am afraid it won't be as lightweight as libcanberra, and you won't be able to cache a sample, for instance. Ideally, GStreamer could also learn to cache samples, or pre-fill PulseAudio...
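If you do go the GStreamer route, the relevant knobs are pulsesink's buffer-time and latency-time properties (in microseconds). A rough, untested sketch of setting them from C; the values here are starting points to tune, not recommendations:

```c
/* Build a pulsesink configured for smaller buffers.  Assumes gst_init() has
 * already been called by the application. */
#include <gst/gst.h>

GstElement *make_low_latency_sink(void)
{
    GstElement *sink = gst_element_factory_make("pulsesink", "audio-sink");
    if (!sink) return NULL;

    g_object_set(sink,
                 "buffer-time",  (gint64) 20000,  /* total ring buffer: 20 ms */
                 "latency-time", (gint64) 5000,   /* segment size: 5 ms */
                 NULL);
    return sink;
}
```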
I'm creating a very simple application to read the info from a GPS. The information is sent over Bluetooth (COM3) in the NMEA 0183 format.
Everything works fine except that I can't find my position, because the RMC and GGA sentences are empty. I receive other sentences with satellite information and positioning, but all I want is my current position (long/lat).
Here is some example of what I currently receive:
$GPZDA,,,,,,*48
$GPGGA,,,,,,,,,,,,,,*56
$GPGLL,,,,,,,*7C
$GPRMC,,,,,,,,,,,*67
$GPGST,,,,,,,,,,,,,,*57
$GPGSA,M,3,09,18,22,14,,,,,,,,,12.2,11.8,3.0*31
$GPGRS,,,,,,,,,,,,,,*51
$GPVTG,30.124,T,30.124,M,0.067,N,0.125,K*49
$GPGSV,2,1,08,22,78,283,50,18,60,137,50,14,54,281,48,09,44,052,48*7F
$GPGSV,2,2,08,46,34,212,,51,28,222,,48,12,247,,35,06,254,*74
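As an aside, these sentences are well-formed NMEA; they just carry no position data. A minimal checksum check (XOR of the bytes between '$' and '*', compared against the two hex digits after '*') confirms that, using one of the lines above:

```c
/* Sketch: verify an NMEA 0183 checksum and note that the GGA line above is
 * valid yet carries empty lat/lon fields, i.e. no fix. */
#include <stdio.h>
#include <stdlib.h>

static int nmea_checksum_ok(const char *s)
{
    if (*s != '$') return 0;
    unsigned char sum = 0;
    const char *p = s + 1;
    while (*p && *p != '*')
        sum ^= (unsigned char)*p++;
    if (*p != '*') return 0;
    return sum == (unsigned char)strtoul(p + 1, NULL, 16);
}

int main(void)
{
    const char *line = "$GPGGA,,,,,,,,,,,,,,*56";
    printf("checksum %s, but the lat/lon fields are empty (no fix)\n",
           nmea_checksum_ok(line) ? "OK" : "BAD");
    return 0;
}
```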
I tried with PuTTY, GPS.NET 3.0.2, and my own program, and the result is the same. BUT when I connect with the proprietary software called eZField, the GPS gets a fix after 20 seconds and I can see the long/lat showing. In eZField, I can't see the raw format, and since it runs on a Pocket PC, I don't know how to sniff the Bluetooth data to see if the software sends any information to the GPS.
My best guess is that eZField sends some information to the GPS receiver to tell it to start sending RMC and GGA. I've read somewhere that there are "initialization strings" that can be sent to a GPS, but I can't find information about this anywhere. My GPS is a ViaSAT L1-GPS Receiver/SBAS.
Can anyone help me? :)
Thanks!
It looks like your GPS doesn't have a fix yet. It is odd, though, that the GPS doesn't simply start searching on its own. Most initialization strings strictly control what type of data to send back (such as the SiRFstar III proprietary format).
Pair this to your PC and run the software on it after you have started some serial port monitoring software. That way, you can see what the init string is, if there is one.
I use this regularly: http://www.serial-port-monitor.com/
I would like to know how the RealVNC remote viewer works.
Does it frequently send screenshots to the client in real time, or does it use another approach?
As a very high-level overview, there are two types of VNC servers:
Screen-grabbing. These servers will capture the current display into a buffer, compare it to the client state, and send only the rectangles that differ to the client.
Hook-assisted. Hooking into the display update process, these servers will be informed when the screen changes by the display manager or OS. They can then use that information to send only the changed rectangles to the client.
In both cases, it is effectively a stream of screen updates; however, only the changed regions of the screen are transmitted to the client. Depending on the version of the VNC protocol in use, these updates may be compressed as well.
(Note that the client is free to request a complete screen update any time it wants to, but the server will only do this on its own if the entire screen is changed.)
Also, screen updates are not the only things transmitted. There are separate channels that the server can use to send clipboard updates and mouse position updates (since a user physically at the remote machine may be able to move the mouse too).
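To make the screen-grabbing case concrete, here is a toy sketch (not taken from any real VNC server) that diffs two framebuffers and reports the bounding rectangle of the changed pixels; this is the kind of region a server would then encode and send. Real servers usually track many smaller rectangles rather than one bounding box.

```c
/* Compare the previously sent framebuffer with the current one and compute
 * the bounding rectangle of all changed pixels (32-bit pixels assumed). */
#include <stdint.h>

typedef struct { int x, y, w, h; } rect_t;

static int dirty_rect(const uint32_t *prev, const uint32_t *cur,
                      int width, int height, rect_t *out)
{
    int min_x = width, min_y = height, max_x = -1, max_y = -1;

    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
            if (prev[y * width + x] != cur[y * width + x]) {
                if (x < min_x) min_x = x;
                if (x > max_x) max_x = x;
                if (y < min_y) min_y = y;
                if (y > max_y) max_y = y;
            }

    if (max_x < 0) return 0;   /* nothing changed, nothing to send */
    *out = (rect_t){ min_x, min_y, max_x - min_x + 1, max_y - min_y + 1 };
    return 1;
}
```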
The display side of the protocol is based around a single graphics primitive: "put a rectangle of pixel data at a given x,y position". At first glance this might seem an inefficient way of drawing many user interface components. However, allowing various different encodings for the pixel data gives us a large degree of flexibility in how to trade off various parameters such as network bandwidth, client drawing speed and server processing speed. A sequence of these rectangles makes a framebuffer update (or simply update). An update represents a change from one valid framebuffer state to another, so in some ways is similar to a frame of video. The rectangles in an update are usually disjoint but this is not necessarily the case.
Read here to find out more about how it works.
Yes. It just sends screenshots of a sort (compressed, and reusing unchanged portions of the previous screenshot).
This is, by the way, the VNC protocol; any client works that way (although the actual way images are compressed, etc., may change).
Essentially the server sends Frame Buffer Updates to the client and the client sends keyboard and mouse input and frame buffer update requests to the server.
Frame Buffer Update messages can have different encodings, but in essence they are different ways of representing rectangular screen areas of pixel data. Generally the client asks for Frame Buffer Updates for the entire screen, but it can ask for just an area of the screen (for example, small-screen clients showing a viewport of the server's screen). The server then sends a FBU (frame buffer update) that contains rectangles where the screen has changed since the last FBU was sent to the client.
The best reference for the RFB/VNC protocol is here. The IETF has a recent (2011) standards document, RFC 6143, that covers RFB, although it is not as extensive as the reference guide.
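For the curious, the wire layout described above is small enough to show directly. The structs below follow RFC 6143; all multi-byte fields are big-endian on the wire, so real code must byte-swap, and the packed structs are only meant to illustrate the layout:

```c
#include <stdint.h>

/* Client -> server: "send me what changed in this area" (message type 3). */
struct rfb_framebuffer_update_request {
    uint8_t  message_type;   /* 3 */
    uint8_t  incremental;    /* 1 = only changed rectangles, 0 = full redraw */
    uint16_t x, y, width, height;
} __attribute__((packed));

/* Server -> client header (message type 0), followed by
 * number_of_rectangles rectangles. */
struct rfb_framebuffer_update {
    uint8_t  message_type;   /* 0 */
    uint8_t  padding;
    uint16_t number_of_rectangles;
} __attribute__((packed));

/* Each rectangle: position, size, and how its pixel data is encoded. */
struct rfb_rectangle_header {
    uint16_t x, y, width, height;
    int32_t  encoding_type;  /* e.g. 0 = Raw, 1 = CopyRect, 16 = ZRLE */
} __attribute__((packed));
```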
It essentially works by sending screenshots on the fly. ("Real time" is something of a misnomer here in that there is no clear deadline.) It does attempt to optimize by only sending areas of the screen that have changed, and some forks of the VNC code line use a mirror driver to receive notification when areas of the display are written to, while others use window message hooks to detect repaint requests.