I have a DJI Phantom 4 and I need to track an object with it. Can I give it instructions using the Onboard SDK, given that it's only for the Matrice 100? And if not, is there any other way to give it instructions using ROS or a Linux operating system?
With a Phantom 4, you can't use the Onboard SDK.
To achieve what you want, your best bet is the Mobile SDK.
It's available on both iOS and Android.
With it, you can get the camera's video feed, run whatever tracking you want on it, convert the result into virtual stick commands, and have the drone visually follow your subject.
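As a rough illustration of the "tracking result to virtual stick command" step, here is a minimal Python sketch of a proportional controller. The frame size, the gains, and the send_virtual_stick() helper are all hypothetical; in a real app this logic would sit in your iOS/Android code next to the Mobile SDK's virtual stick calls.

    # Sketch only: map the tracked subject's bounding box to yaw/climb
    # corrections. Frame size, gains and send_virtual_stick() are made up.
    FRAME_W, FRAME_H = 1280, 720
    YAW_GAIN = 0.05      # deg/s of yaw per pixel of horizontal error (tune it)
    CLIMB_GAIN = 0.002   # m/s of climb per pixel of vertical error (tune it)

    def stick_command(bbox):
        """bbox = (x, y, w, h) from whatever visual tracker you run on the feed."""
        x, y, w, h = bbox
        err_x = (x + w / 2) - FRAME_W / 2   # positive: subject right of centre
        err_y = (y + h / 2) - FRAME_H / 2   # positive: subject below centre
        yaw_rate = YAW_GAIN * err_x         # rotate toward the subject
        climb = -CLIMB_GAIN * err_y         # climb/descend to re-centre it
        return yaw_rate, climb

    def send_virtual_stick(yaw_rate, climb):
        # Placeholder: in the real app this would be the Mobile SDK's
        # virtual stick call on the flight controller; here we just print.
        print("yaw %.2f deg/s, climb %.2f m/s" % (yaw_rate, climb))

    send_virtual_stick(*stick_command((900, 200, 80, 120)))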
Hope this helps.
Does Microsoft deliver a generic Windows Embedded Compact 7 SDK?
I'm aware that, in principle, I should get the SDK from the hardware manufacturer, who would create it using their Platform Builder. But when making software that's supposed to be used by many different customers on a lot of different CE 7 hardware, that's impossible.
The SDK that Microsoft provides for CE 5 works very well on most CE 5 (and 6 and 7) devices, so it seems it should be possible to have an (almost) generic SDK.
Unfortunately, the "standard SDK" OS component is not supported on CE 6, 7, and 8 (2013), and this prevents device manufacturers from providing a minimum set of features covered by it.
This means that to develop a generic app you'll have to either build your own SDK, including all the features you need (with no guarantee that it will work on a specific device), or rebuild the app for each device you target (you'll discover whether a device lacks some components, but this may be hard to maintain if you target a large number of different platforms).
If you don't want to build your own images and SDKs, Toradex provides fairly generic SDKs for ARM (covering the components we put in our image, which are more or less those in the core licenses).
I recently purchased a micro:bit. I've seen that MicroPython and Bluetooth cannot be used at the same time due to memory constraints.
Does anyone know if I would be able to build a decent application using the JavaScript block programming?
The app basically has to do the following:
Read data from the accelerometer.
Accumulate some accelerometer data.
Send the information to another device connected via Bluetooth.
Yes, you should be able to write a program for the micro:bit that does this. The official documentation describes the Bluetooth services that are available. I also found an example suggesting there is an app you can use at the phone end, if that's relevant to your application.
The MicroPython restriction is a combination of the BLE protocol stack requiring 12 kB of RAM and Python being interpreted (and therefore having a high RAM requirement).
You can choose the block version or text-based JavaScript, and should be able to write reasonably complex programs (even if the text entry might be best done in an editor). As a final fallback, you can drop down to C/C++ using the micro:bit DAL (which seems to be built on top of the mbed offline toolchain).
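For the read-and-accumulate part, here is a minimal MicroPython sketch. As noted above, the Bluetooth send isn't possible from MicroPython, so for the full app you'd rebuild this in the blocks/JavaScript editor with its Bluetooth services; the window size and sample rate here are arbitrary choices, and print() stands in for the send.

    # Read the accelerometer, accumulate a window of samples, report an
    # average. The Bluetooth send is out of reach in MicroPython, so
    # print() is a stand-in for it.
    from microbit import accelerometer, sleep

    WINDOW = 50                # samples to accumulate before reporting
    samples = []

    while True:
        samples.append(accelerometer.get_values())   # (x, y, z) in milli-g
        if len(samples) == WINDOW:
            avg = tuple(sum(axis) // WINDOW for axis in zip(*samples))
            print(avg)         # stand-in for the Bluetooth send
            samples = []
        sleep(20)              # roughly 50 samples per second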
This is one of my coding projects. I'm fairly new to Linux, so I need some pointers and thoughts from you before I get started. I know screen-sharing software already exists, but I want to make my own! (=
Specifically, I want to clone my laptop screen to my TV over WLAN, via a Linux box that is connected to the TV through a VGA cable:
The laptop streams its screen
The Linux box reads the stream
The Linux box outputs the stream to the TV (through the VGA cable)
First of all, how do I record the screen and send the stream in real time in Linux?
Secondly, I must write a program that reads the stream being sent. The program must listen on some port and collect the data being streamed from the laptop. Any thoughts?
Then I must output that data in real time to the TV. Do you have any ideas on how to solve this?
Thanks!
Edit: Regarding programming languages, I'm most comfortable with Python.
Sharing your screen can be done via the various flavors of VNC (e.g. RealVNC, TightVNC, UltraVNC). Most of them are open source, so you might want to:
Stick with the VNC protocol for later compatibility
Take example from how the established solutions do their screen hooking.
In Linux, graphics are all processed by Xorg (the current X server), which was designed with networking built in. This is why you can ssh -X into a machine, run a graphical application on it, and see it on your local computer. I recommend reading about hooks in Xorg to achieve what you need.
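If you want to grab frames from X yourself rather than go through VNC, a screenshot library hands you the raw pixels. Here is a minimal sketch using the third-party mss package (my choice of library is an assumption; any X11 grab route would do):

    # Grab one raw frame from the X display with the "mss" package
    # (pip install mss); loop this for a live capture.
    import mss

    with mss.mss() as sct:
        monitor = sct.monitors[1]           # the first physical screen
        frame = sct.grab(monitor)           # raw BGRA pixel data
        print(frame.size, len(frame.rgb))   # e.g. (1920, 1080), w*h*3 bytes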
You need a client-server topology to achieve this. You don't say which programming language you plan to use, though, and some languages are harder than others to start with. Furthermore, this kind of code is already well understood in every major programming language, so you should at least use a framework that simplifies the networking portion of the project.
Outputting to the TV can be handled by your video card driver in Linux. Just check your desktop environment (KDE and GNOME offer video configuration panels, for example) or your video card configuration (the NVIDIA and ATI Linux drivers offer multi-screen support).
It seems to me that you're trying to reinvent the wheel and aren't too sure how to begin. I recommend starting simple with one of the proven VNC packages and seeing how it goes from there. If a feature is missing, you've got the source code of both the server and the client, so you can continue development of those projects. Once you've got your setup working, start thinking about replacing a single piece of the puzzle with your own code, and see how it goes.
Don't expect good video quality (full HD, for instance) on your TV without a very capable CPU/GPU and an 802.11n wireless network free of other users, and be ready to accept some lag while the codecs kick in.
You should take the smallest steps possible. If I were taking on such a project, my first step would be to implement a solution using standard Unix tools (e.g. netcat or socat for the network part, mplayer or vlc for the playback, and maybe ffmpeg for the capture). Then replace each component with a custom-written one as needed.
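Since you mention Python, here is a rough sketch of that tool-based pipeline driven from Python, as two separate scripts (one per machine). The address, port, resolution, and codec flags are placeholders to tune for your setup; start the receiver first, since ffmpeg connects out to it.

    # --- laptop (sender): grab the X11 screen, encode, push over TCP ---
    # Requires ffmpeg; every flag and address here is a tunable placeholder.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-f", "x11grab", "-video_size", "1280x720", "-framerate", "25",
        "-i", ":0.0",
        "-vcodec", "libx264", "-preset", "ultrafast", "-tune", "zerolatency",
        "-f", "mpegts", "tcp://192.168.0.42:9000",   # the Linux box's address
    ])

    # --- Linux box (receiver): accept the stream, pipe it into mplayer ---
    import socket
    import subprocess

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", 9000))
    server.listen(1)
    conn, addr = server.accept()

    player = subprocess.Popen(["mplayer", "-"], stdin=subprocess.PIPE)
    try:
        while True:
            chunk = conn.recv(65536)
            if not chunk:
                break
            player.stdin.write(chunk)
    finally:
        player.stdin.close()
        conn.close()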
I want to simulate touch events on Windows, Mac OS X, or Linux (the OS is not critical).
Under Windows Vista and Windows 7, the Multi-Touch Vista drivers let you use 2 mice to simulate multi-touch gestures. It should degrade nicely to the "normal" touch experience. In my experience, it can be fairly tricky, but it works.
It really kind of depends on what you're working on and what the goal is. If you can separate the response from the action in your application, that will make it a lot easier to test something like this without going through a lot of hassle.
If you wind up needing a touch device, Wikipedia has a nice list of multi-touch devices.
A lot of time has passed since this question was asked, but maybe someone will google a similar question and find that there is another option:
If you have an Android device, you can use it as a touch screen for Windows.
There may be other software that does this, but I used the following one:
https://play.google.com/store/apps/details?id=com.tnksoft.winmultitouchfree
This program from a Japanese developer, coupled with a desktop app, does the trick.
You can get the desktop app from his site, http://www.tnksoft.com/ .
Unfortunately it is a Japanese site, but it's not hard to find the necessary program there (it has an easily recognisable icon).
Alternatively, you can get the link once you install the Android app.
I am a .NET programmer who needs to port a good desktop OTP system, already in production, to cell phones. As far as I know, J2ME is the correct way to do it. I'd appreciate any good advice about IDEs, first steps, books, or any other information.
Well, the Eclipse IDE has good J2ME support, or so I've heard.
For the API, read the Javadocs:
http://java.sun.com/javame/reference/apis.jsp
You'll have to figure out which device you want to target, and grab its emulator.
Then proceed to make a hello-world app with the aid of tutorials.
I would give NetBeans a try as well. Eclipse and NetBeans are very similar, but the differences can be night and day depending on your personal preferences. NetBeans also has great J2ME project support, and it works plug-and-play with the emulator for any device you may need to target, though I recommend sticking to the default or Sony Ericsson's. Motorola's was always buggy and never reflected the device at all, and Nokia's was always sloooow.
Also, there are a ton of devices out there. Before you jump headfirst into this, you should define a scope of exactly which devices you will need to target. This will have a huge impact on scheduling, as porting is no small task.
Finally, just get your hands on the actual devices you need to target. Emulators are a good way to start, but there are always so many nuances and problems that pop up once you throw the app on the device that it's best to have your target devices from day one.