I'm using a GBA-to-GameCube link cable to do some homebrew. I'm fairly sure the GameCube side works fine, but the GBA side is confusing me.
How do I correctly send and receive data on the GBA side? GBATek suggests using some memory-mapped registers, but no matter what I try I end up losing or otherwise mangling data.
First of all: what I am trying to do is purely out of private interest.
I'd like to connect an AT-09/HM-10 BLE module with firmware 6.01 to another device which also provides a BLE module, but one that is not based on the CC254x chip.
I am able to communicate with this device using my laptop (integrated Bluetooth, Linux, and bluepy-helper). I am also able to make a connection using the HM-10 through a USB-RS232 module and HTerm, but after that I am quite stuck in my progress.
By "reverse-engineering" the Android application for controlling this particular device, I found a set of commands stored as strings in hex format. The Java application sends out a particular command combined with a CRC16-Modbus value, together with a request (whatever that is), to a particular service and characteristic UUID.
I also have a Wireshark capture pulled from my Android phone while the application was connected to the device, but I am unable to find the commands extracted from the .apk in this capture.
This is where I get stuck. After making a connection and sending out the command plus its CRC16 value, I get no response at all, so I suspect my approach is wrong. I am also not quite sure how the HM-10 firmware handles/maps the service and characteristic UUIDs of the destination device.
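One thing worth ruling out first is the checksum itself: CRC-16/Modbus is easy to recompute, so you can verify that the hex strings from the .apk actually carry the CRC the device expects. A minimal sketch (Python; standard CRC-16/Modbus parameters: init 0xFFFF, reflected polynomial 0xA001, no final XOR):

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/Modbus: init 0xFFFF, reflected poly 0xA001, no final XOR."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Well-known check value for the ASCII string "123456789":
assert crc16_modbus(b"123456789") == 0x4B37
```

Note that Modbus conventionally transmits the CRC low byte first, so also check which byte order the app appends to the command before deciding your CRC is wrong.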
Are there perhaps any special AT commands which would fit my needs?
I am absolutely not into the technical depths of Bluetooth and its communication layers. The only thing I know is that the HM-10 connects to a selected BLE device and then provides serial I/O, with data flowing between the endpoints.
I have no clue how, or whether, it can handle data flow to specific service/characteristic UUIDs on the destination endpoint, although it seems to have GATT, L2CAP services and so on built in. Surely it handles all the necessary communication by itself, but I don't know where to get access to the "front end" at all.
Best regards!
I'm writing a kernel module that sends and receives internet packets, and I'm using Generic Netlink to communicate between kernel and userspace.
When the application wants to send an internet message (doesn't matter what the protocol is), I can send it to the Kernel with no problems via one of the functions I defined in my generic netlink family and the module sends it through the wire. All is fine.
But when the module receives a packet, how can I reach the appropriate process to deliver the message? My trouble is not in identifying the correct process (that is done via custom protocols, e.g. IP tables); it is rather: what information should I store in order to notify the correct process?
So far I keep only the portid of the process (because it initiates the communication), and I have been trying to use the function genlmsg_unicast(), but it was changed in a 2009 kernel release such that it requires an additional parameter (besides the skb buffer and the portid): a pointer to a struct net. None of the tutorials I have found address this issue.
I tried passing &init_net as the new parameter, but the computer just freezes and I have to restart it with the power button.
Any help is appreciated.
I discovered what was causing the issue:
It turned out that I was freeing the buffer at the end of the function. #facepalm
I shouldn't have been doing that, because the buffer gets queued and waits there until it is actually delivered. So if genlmsg_unicast() succeeds, it is not the caller's responsibility to free the buffer.
Now it works with &init_net.
I am working on a project where an Arduino will send measurements and receive commands through an Ethernet interface and a REST API to open and lock a door. For all intents and purposes, we can consider the devices themselves protected, but the Ethernet network may be accessed; therefore, a man-in-the-middle attack is plausible.
The commands to open/lock the door will be part of the reply to an HTTP GET request. To prevent a MITM attack where the response is faked to open the lock, I want to use some kind of encrypted response. Now, Arduinos lack the power to use HTTPS, and I want to stick with Arduinos because of cost and ease of development.
I have come up with the following scheme:
Both the Arduino and the server share an identical set of index-value registers. The value is used as a key to encrypt with AES-128.
When the Arduino sends its GET request, it also sends a randomly selected index, indicating to the server which value to use to encrypt the open/lock command.
The server sends a clear text response (JSON) where the command field is an encrypted text.
The Arduino will decrypt it and apply the required action.
The Arduino will also send some sensor data from time to time. In this case, it will send the index of the code it used to encrypt the data and the encrypted data.
The set of index-value keys is large, so repetitions are rare (but may occur from time to time).
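The exchange above can be sketched end-to-end as follows. This is a toy illustration only: the real devices are assumed to use AES-128, but since the Python standard library has no AES, an HMAC-SHA256 keystream stands in for it here, and the key table and command names are made up for the example.

```python
import hashlib
import hmac
import secrets

# Shared index -> 16-byte key table, provisioned identically on the
# Arduino and the server (values here are hypothetical demo material).
KEY_TABLE = {i: hashlib.sha256(b"demo-master" + bytes([i])).digest()[:16]
             for i in range(256)}

def keystream_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Stand-in for AES-128 in a stream mode: XOR the data with an
    HMAC-SHA256 keystream derived from key, nonce, and block counter."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hmac.new(key, nonce + counter.to_bytes(4, "big"),
                         hashlib.sha256).digest()
        stream.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

# 1. The Arduino picks a random index and sends it with its GET request.
index = secrets.randbelow(256)
nonce = secrets.token_bytes(8)

# 2. The server encrypts the command with the key at that index.
ciphertext = keystream_encrypt(KEY_TABLE[index], nonce, b"OPEN")

# 3. The Arduino decrypts with the same index (the XOR keystream
#    is its own inverse) and applies the action.
plaintext = keystream_encrypt(KEY_TABLE[index], nonce, ciphertext)
assert plaintext == b"OPEN"
```

Note how the randomly chosen per-request index effectively plays the role of a challenge in this design.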
My question is, is this scheme secure? Am I missing something? Is there any other more tested alternative for securing these interactions that doesn't involve using a more advanced platform?
Thanks in advance!
Use an ESP8266-based Arduino. It does not cost significantly more and it uses the same tools, but you can use SSL instead of rolling your own solution. I have used the Wemos D1 boards and they work as a drop-in Arduino replacement.
I am working on an application that will send OSC control messages (which, as I understand it, are datagram packets) from a web page to an OSC receiver (server), such as Max/MSP or Node or any other.
I know UDP is typically used because speed is important in the realtime audio-visual control work done with OSC (which is also the work I will be doing), but I know other methods can be used.
Right now for instance, I am sending OSC from the browser to a node.js server (using socket.io), and then from the node.js server to Max (which is where I ultimately want the data), also using socket.io. I believe this means I am using websockets and the delay/latency has not been bad.
I am curious though, now that WebRTC is out, if I should place the future of my work there. In all of my work with OSC I have always used UDP, and only used the Socket.io/Websockets connection because I didn't know about WebRTC.
Any advice on what I should do? Specifically, I am interested in:
1. How I could send OSC messages from the browser directly to an OSC server (as opposed to going through a node server first)
2. If I should stay with the Node/Socket.io/Websocket method for sending OSC data or should I look into WebRTC?
Regarding your first question, whether there is a solution for a direct WebSocket link between the browser and a server (for OSC), you can have a look at this:
ol.wsserver object for Max/MSP by Oli Larkin: https://github.com/olilarkin/wsserver
I published an osc-js utility library which does the same with a UDP/WebSocket bridge plugin:
https://github.com/adzialocha/osc-js
This is not really a question about any specific technical challenge, but rather a discovery challenge, and it is pretty much opinion-based.
But I will try to share my thoughts, as they may be useful to others as well.
If the OSC server supports WebSocket or WebRTC connections, then you might use one of them to talk to it directly.
Given the nature of browsers, you probably need to support several protocols rather than one specific one, since browser support varies: http://caniuse.com/rtcpeerconnection (unless your users can be "forced" to use a specific browser).
WebRTC is meant to be high-performance, but I'm not sure it is good to compare it with UDP at all: it is much more than UDP, and implementations vary (internally) per browser.
You need to talk to a server rather than between clients, while WebRTC is mainly designed for peer-to-peer communication. So you need to investigate carefully whether there are real benefits in latency, as well as in server-side CPU usage.
The benefit of UDP is the fact that some packets can be dropped/skipped/undelivered, since the data is not "mandatory" (like picture frames or pieces of sound, where loss only reduces quality), which is "expected" behaviour in the streaming world. WebSockets will ensure that data is delivered, and the "price" for that is the machinery which ultimately slows down the packet queue in order to guarantee ordered delivery.
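Whichever transport you pick, the OSC payload itself is transport-agnostic. A minimal sketch of the OSC 1.0 binary encoding for a single message (int32 arguments only, with the 4-byte padding the spec requires) shows what actually goes over the wire; Python is used here purely for illustration:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Pad to a multiple of 4 bytes; OSC strings get at least one NUL."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: int) -> bytes:
    """Encode an OSC message with int32 arguments only (minimal sketch)."""
    out = osc_pad(address.encode("ascii"))                    # address pattern
    out += osc_pad(("," + "i" * len(args)).encode("ascii"))   # type tag string
    for a in args:
        out += struct.pack(">i", a)                           # big-endian int32
    return out

packet = osc_message("/test", 1)  # 16 bytes
```

The resulting packet could then be handed to any transport, e.g. sent over UDP with `socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, (host, port))`, or pushed through a WebSocket bridge unchanged.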
I'm writing a driver in the Linux kernel that sends data over the network. Suppose the data to be sent (a buffer) is in kernel space. How do I send the data without creating a socket (and first of all, is that a good idea at all)? I'm looking for performance in the code rather than ease of coding. And how do I design the receiver end? Without a socket connection, can I get and view the data on the receiver end, and how? And will any of this (including the performance) change if the buffer is in user space? (I'll do a copy_from_user if it does :-))
If you are looking to send data on the network without sockets, you'd need to hook into the network drivers, send raw packets through them, and filter their incoming packets for the ones you want to hijack. I don't think the performance benefit would be large enough to warrant this.
I don't even think there are normal hooks for this in the network drivers. I did something related in the past to implement a firewall; you could conceivably use the netfilter hooks to do something similar in order to attach to the receive side of the network drivers.
You should probably use netlink, and if you really want to communicate with a distant host (e.g. through TCP/IPv6), use a user-level proxy application for that (so the kernel module uses netlink to talk to your proxy application, which could then use TCP, or even go through ssh or HTTP, to send the data remotely, or store it on disk...).
I don't think having a kernel module talk directly to a distant host makes sense otherwise (e.g. security issues, filtering, routing, iptables...).
And the real bottleneck is almost always the (physical) network itself: 1 Gbit Ethernet is almost always much slower than what a kernel module, or an application, can sustainably produce (and there are latency issues as well).