I made a program, which is on GitHub: https://github.com/bigboy32/MacOSmojave
The only issue is that it switches the background based on the time, not based on the actual sunlight.
So what is the error/question?
Let's say it is 7 pm and it is still light outside, but the PC is already showing the 'Evening' background!
So that's my question.
Thanks for all the answers!
You can use any weather measurement service with an open API to retrieve the information you want, like this one.
This API returns UVI values; you can interpret them and build your logic around that, since the UVI will be lower when the sun is going down.
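For example, a minimal Python sketch of that logic; the endpoint URL, key, location and threshold below are placeholders for whichever open API you pick, not a specific service's real interface:

```python
import requests

# Placeholder endpoint/key -- substitute whichever open weather API you choose.
API_URL = "https://api.example-weather.com/current"
API_KEY = "YOUR_API_KEY"
LAT, LON = 52.52, 13.40           # your location

# A UVI below this value is treated as "the sun is effectively down" (tune it).
UVI_NIGHT_THRESHOLD = 0.5

def is_dark_outside():
    """Return True when the reported UV index suggests it is dark outside."""
    resp = requests.get(API_URL,
                        params={"lat": LAT, "lon": LON, "appid": API_KEY},
                        timeout=10)
    resp.raise_for_status()
    uvi = resp.json().get("uvi", 0.0)
    return uvi < UVI_NIGHT_THRESHOLD

if __name__ == "__main__":
    wallpaper = "evening.jpg" if is_dark_outside() else "day.jpg"
    print("Switch wallpaper to", wallpaper)
```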
Easy! Get a Raspberry Pi Zero W for $12 and attach a Light Dependent Resistor to it. Read the light once a minute and send the reading to your computer via WiFi.
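A rough sketch of the Pi side, assuming the LDR sits in the usual resistor-capacitor charging circuit on a GPIO pin and that a small HTTP endpoint on your computer accepts the reading (the pin number and URL below are placeholders):

```python
import time
import requests
import RPi.GPIO as GPIO

LDR_PIN = 4                                       # GPIO pin wired to the LDR/capacitor junction (placeholder)
COMPUTER_URL = "http://192.168.1.50:8000/light"   # hypothetical endpoint on your computer

def read_ldr(pin, limit=100000):
    """Rough light reading: how long the capacitor takes to charge.
    Bigger numbers mean darker (the LDR's resistance rises in the dark)."""
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.LOW)                    # discharge the capacitor
    time.sleep(0.1)
    GPIO.setup(pin, GPIO.IN)
    count = 0
    while GPIO.input(pin) == GPIO.LOW and count < limit:
        count += 1
    return count

GPIO.setmode(GPIO.BCM)
try:
    while True:
        reading = read_ldr(LDR_PIN)
        try:
            requests.post(COMPUTER_URL, json={"light": reading}, timeout=5)
        except requests.RequestException:
            pass                                  # computer unreachable; retry next minute
        time.sleep(60)                            # once a minute, as suggested above
finally:
    GPIO.cleanup()
```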
I know this question has been asked many times. Until yesterday I thought the answer was "yes, it is possible, but you cannot obtain an accurate result for your position". My idea is to hold a BLE badge in my hand and, with four other devices positioned on the ceiling, obtain my current position using trilateration. After weeks of research I concluded that this method could not be as accurate as I'd like it to be, so I moved on.
Now, what about this video? YouTube by Loopd.
They use Bluetooth badges, but how do they obtain these results?
Thanks to everyone
The results of Bluetooth LE indoor location can be quite accurate, but it requires some processing of the raw signals rather than simple triangulation. Essentially you weight different beacons differently in your position calculation based on how far away they are and filter to smooth the result.
There is a working example as open source at http://vor.space/
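To make the "weight the beacons and smooth the result" idea concrete, here is a minimal Python sketch; the beacon coordinates and path-loss constants are made-up values you would have to calibrate, and a real system does considerably more filtering than this:

```python
# Known beacon positions on the ceiling, in metres (example values).
BEACONS = {
    "beacon1": (0.0, 0.0),
    "beacon2": (5.0, 0.0),
    "beacon3": (0.0, 5.0),
    "beacon4": (5.0, 5.0),
}

TX_POWER = -59      # RSSI measured at 1 m (assumption; calibrate per beacon)
PATH_LOSS_N = 2.0   # environment-dependent path-loss exponent (assumption)

def rssi_to_distance(rssi):
    """Log-distance path-loss model: rough distance estimate from RSSI."""
    return 10 ** ((TX_POWER - rssi) / (10 * PATH_LOSS_N))

def weighted_position(rssi_readings):
    """Weighted centroid: nearer beacons (stronger signal) count for more."""
    num_x = num_y = denom = 0.0
    for name, rssi in rssi_readings.items():
        x, y = BEACONS[name]
        d = max(rssi_to_distance(rssi), 0.1)
        w = 1.0 / (d * d)                 # weight falls off with distance squared
        num_x += w * x
        num_y += w * y
        denom += w
    return num_x / denom, num_y / denom

def smooth(prev, new, alpha=0.2):
    """Exponential smoothing to damp the RSSI jitter between updates."""
    return tuple(alpha * n + (1 - alpha) * p for p, n in zip(prev, new))

# Example: one set of RSSI readings from the four ceiling beacons.
readings = {"beacon1": -62, "beacon2": -75, "beacon3": -70, "beacon4": -80}
print(weighted_position(readings))
```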
Hi guys. At school we use badges to mark who is present; for my exam I want to upgrade that system.
I would like to create a face recognition system: basically, I would like to set up a Raspberry Pi with a camera over the doors so that, when students pass through, they are automatically marked as present.
I know of OpenBR, but I didn't understand whether I can use it for my project, and I have some issues with it: I can't install it, and it returns an error when I test it.
I'm asking whether you know if OpenBR can do the trick for me (keep in mind that there are a lot of us at school), or if there are other technologies I could use.
You could look at using OpenCV to train an object detector to look for the badge:
http://docs.opencv.org/2.4/doc/user_guide/ug_traincascade.html
https://www.youtube.com/watch?v=WEzm7L5zoZE
If each of the badges has a unique identifier for the student, you could then analyse the identifier to take attendance.
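As a rough sketch of the detection step, assuming you have already trained a cascade (badge_cascade.xml is just a placeholder name), the OpenCV-Python side could look like the following; decoding the student identifier from the detected region would be a separate step:

```python
import cv2

# Placeholder path: the cascade you trained with opencv_traincascade.
badge_cascade = cv2.CascadeClassifier("badge_cascade.xml")

cap = cv2.VideoCapture(0)                 # camera watching the door
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    badges = badge_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in badges:
        roi = frame[y:y + h, x:x + w]
        # TODO: decode the unique identifier from `roi`
        # (e.g. a barcode/QR code printed on the badge) and mark attendance.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("door camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```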
Identifying the badge / face would be the "easy" part. Identifying the student would be the hard part!
Identifying people from photos is tricky, and I would estimate that Facebook has spent millions on this problem.
Here are a couple of links that may be useful
http://scikit-learn.sourceforge.net/0.6/auto_examples/applications/plot_face_recognition.html
OpenCV identify person with face detection
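As a rough sketch along the lines of the OpenCV link above, here is what training and querying the LBPH face recognizer from opencv-contrib might look like; the directory layout and the numeric student IDs are assumptions made purely for illustration:

```python
import os
import cv2
import numpy as np

# Requires the opencv-contrib-python package (for the cv2.face module).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()

def load_training_data(root="students"):
    """Assumed layout: students/<numeric_student_id>/*.jpg (a few photos each)."""
    faces, labels = [], []
    for student_id in os.listdir(root):
        for fname in os.listdir(os.path.join(root, student_id)):
            img = cv2.imread(os.path.join(root, student_id, fname),
                             cv2.IMREAD_GRAYSCALE)
            if img is not None:
                faces.append(img)
                labels.append(int(student_id))
    return faces, np.array(labels)

faces, labels = load_training_data()
recognizer.train(faces, labels)

# Recognise faces in a frame from the door camera (placeholder image file).
frame = cv2.imread("door_frame.jpg", cv2.IMREAD_GRAYSCALE)
for (x, y, w, h) in face_cascade.detectMultiScale(frame, 1.1, 5):
    label, confidence = recognizer.predict(frame[y:y + h, x:x + w])
    print("student", label, "confidence", confidence)  # lower is a better match
```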
You are using a Raspberry Pi for your project, so:
Software:
1. OpenCV-Python is a very good choice.
2. SimpleCV is simpler to use but less powerful than OpenCV. It's still OK for your purpose.
Hardware:
You also need to be aware of the hardware: a USB webcam is not a good choice because it is slow.
The camera module is better because it uses a serial (CSI) interface to transfer data.
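For illustration, grabbing frames from the camera module into OpenCV with the picamera library might look like this (the resolution and frame rate are just example values to tune for your Pi):

```python
import cv2
import picamera
import picamera.array

with picamera.PiCamera(resolution=(640, 480), framerate=15) as camera:
    with picamera.array.PiRGBArray(camera, size=(640, 480)) as stream:
        for _ in camera.capture_continuous(stream, format="bgr",
                                           use_video_port=True):
            frame = stream.array              # BGR frame, ready for OpenCV
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # ... run your face/badge detection on `gray` here ...
            stream.truncate(0)                # reuse the buffer for the next frame
```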
I'm trying to interface a Nexys3 board with a VmodTFT via a VHDCI connector. I am pretty new to FPGA design, although I have experience with micro-controllers. I am trying to approach the whole problem as an FSM. However, I've been stuck on this for quite some time now. What signals constitute my power-up sequence? When do I start sampling data? I've looked at the relevant datasheets and they don't make things much clearer. Any help would be greatly appreciated (P.S.: I use Verilog for the design).
EDIT:
Sorry for the vagueness of my question. Here's specifically what I am looking at.
For starters, I am going to overlook the touch module. I want to look at the whole setup as an FSM. I am assuming the following states:
1. Setup connection or handshake signals
2. Switch on the LCD
3. Receive pixel data
4. Display video
5. Power off the LCD
Would this be a reasonable FSM? My main concerns are with interpreting the signals. Table 5 in the VmodTFT_rm manual shows a list of signals; however, I am having trouble understanding which signals are for what (this is my first time working with display modules). I am going to assume everything prefixed with TFT_ is for the display and everything with TP_ is for the touch panel (please correct me if I'm wrong). So what signals would I be changing in each state, and what would act as inputs?
Now what changes should I make to accommodate the touch panel too?
I understand I am probably asking for too much, but I would greatly appreciate a push in the right direction, as I have been stuck on this for a long time.
Your question could be filled out a little better; it's not clear exactly what's giving you trouble.
I see two relevant docs online (you may have seen these):
Schematic: https://digilentinc.com/Data/Products/VMOD-TFT/VmodTFT_sch.pdf
User Guide: https://digilentinc.com/Data/Products/VMOD-TFT/VmodTFT_rm.pdf
The user guide explains which signals are part of the power-up sequence:
1. You must wait between 0.5 ms and 100 ms after driving TFT-EN before you can drive DE and the pixel bus.
2. You must wait 0 to 200 ms after setting up valid pixel data before enabling the display (with DISP).
3. You must wait 160 ms after enabling DISP before you start pulsing LED-EN (PWM controls the backlight).
Admittedly the documentation doesn't look great and some of the signal names are not consistent, but I think you can figure it out from there.
After looking at the user guide to understand what the signals do, look at the schematic to find the mapping between the signal names and the VHDCI pinout. Then, when you connect the VHDCI pinout to your FPGA, look at your FPGA board's manual to find the mapping between the VHDCI connector pins and the balls of the FPGA, and finally use the FPGA's configuration settings to map each FPGA ball to a logical Verilog input on your top module.
Hope that clears things up a bit, but please clarify in your question exactly what you don't understand.
I am using an MPU6050 IMU to map the path of a device (with the starting point as the origin). For this I need to convert the accelerometer and gyroscope readings into (Cartesian) co-ordinates. I think I need to continuously sample the accelerometer readings and keep adding (integrating) each sample to the previous point for each axis respectively. At startup the previous point will be (0, 0, 0).
I know this on paper, but I don't think it will be that simple. How will I know when the device is moving backwards, i.e. towards the origin?
The MPU6050 provides acceleration and gyro readings on all axes, and I used these to fetch the values. But I don't know how to continue. So what I need is an "inertial navigation system" that takes the acceleration and angular velocity vectors, as well as the current position, as input and returns the new position. I know this will have errors, but I am not concerned about that for now.
If someone can guide me on this, that would be great. Any hints or pointers will be appreciated.
Kiran G
Kiran,
To answer that question, it would be good to know what kind of gyro you are using or willing to use. Things are very different depending on whether the output is an analog signal (voltage or current loop) or some kind of (normally serial) bus.
Please note that you will most likely also have to filter the signal based on the expected dynamics of the environment.
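To make the integration idea from the question concrete, here is a very naive planar Python sketch (the sampling rate and sample data are placeholders). Note that "moving backwards" falls out of the maths on its own: a negative acceleration eventually makes the integrated velocity, and therefore the position increments, negative. A sketch like this drifts badly within seconds, which is exactly why the bias removal and filtering mentioned above matter:

```python
import math

DT = 0.01   # sampling period in seconds (placeholder: 100 Hz)

def dead_reckon(samples):
    """Very naive planar dead reckoning.
    `samples` is a list of (ax, ay, gz): body-frame acceleration in m/s^2
    and yaw rate in rad/s. Returns the list of estimated (x, y) positions.
    Real code must first remove sensor bias, gravity and noise."""
    heading = 0.0
    vx = vy = 0.0
    x = y = 0.0
    path = [(x, y)]
    for ax_b, ay_b, gz in samples:
        heading += gz * DT                         # integrate yaw rate -> heading
        # Rotate the body-frame acceleration into the world frame.
        ax = ax_b * math.cos(heading) - ay_b * math.sin(heading)
        ay = ax_b * math.sin(heading) + ay_b * math.cos(heading)
        vx += ax * DT                              # integrate acceleration -> velocity
        vy += ay * DT
        x += vx * DT                               # integrate velocity -> position
        y += vy * DT
        path.append((x, y))
    return path

# Placeholder data: accelerate forward for a second, then coast.
samples = [(0.5, 0.0, 0.0)] * 100 + [(0.0, 0.0, 0.0)] * 100
print(dead_reckon(samples)[-1])
```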
I'm looking into making a project with the Kinect to allow my Grandma to control her TV without being daunted by the remote. So I've been looking into basic gesture recognition. The aim is to, say, turn the TV's volume up by sending the right IR code to the TV when the program detects that the right hand is being "waved".
The problem is, no matter where I look, I can't seem to find a Linux-based tutorial which shows how to do something as the result of a gesture. One other thing to note is that I don't need any GUI apart from the debug window, as that would slow my program down a fair bit.
Does anybody know of something, somewhere, that will let me constantly check for a hand gesture in a loop and, when one is detected, control something, without the need for any GUI at all, and on Linux? :/
I'm happy to go with any language, but my experience revolves around Python and C.
Any help will be received with great appreciation.
Thanks in advance
Matt
In principle, this concept is great, but the number of features a remote offers is going to be hard to replicate with a set of gestures that an older person can memorize. They will probably be even less incentivized to do this (learning new things sucks) if they already have a solution (the remote), even though they really love you. I'm just warning you.
I recommend you use OpenNI and NITE. Note that the current version of OpenNI (2) does not have Kinect support; you need to use OpenNI 1.5.4 and look for the SensorKinect093 driver. There should be some gesture code that works with that (googling "OpenNI gesture" yields a ton of results). If you're using something that expects OpenNI 2, be warned that you may have to write some glue code.
The basic control set would be Volume +/-, Channel +/-, Power on/off. But that will be frustrating if she wants to go from Channel 03 to 50.
I don't know how low-level you want to go, but a really, REALLY simple gesture recognizer could look at horizontal and vertical swipes of the right hand exceeding an (averaged) velocity threshold. Be warned: detected skeletons can get really wonky when people are sitting (that's actually a bit of what my PhD is on).
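A bare-bones Python sketch of that velocity-threshold idea, assuming you already get (x, y) right-hand positions per frame from OpenNI/NITE; get_right_hand_position and send_ir_code below are hypothetical stand-ins for your skeleton tracker and IR blaster:

```python
import time
from collections import deque

SWIPE_SPEED = 1.2        # m/s threshold (assumption; tune per user)
WINDOW = 5               # number of recent frames to average over
FRAME_DT = 1.0 / 30      # the Kinect delivers roughly 30 frames per second

def detect_swipe(positions, dt):
    """positions: recent (x, y) hand positions in metres, oldest first.
    Returns 'left', 'right', 'up', 'down' or None."""
    if len(positions) < 2:
        return None
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    elapsed = dt * (len(positions) - 1)
    vx = (x1 - x0) / elapsed
    vy = (y1 - y0) / elapsed
    if abs(vx) > SWIPE_SPEED and abs(vx) > abs(vy):
        return "right" if vx > 0 else "left"
    if abs(vy) > SWIPE_SPEED:
        return "up" if vy > 0 else "down"
    return None

history = deque(maxlen=WINDOW)
while True:
    x, y = get_right_hand_position()   # hypothetical: from your OpenNI/NITE tracker
    history.append((x, y))
    gesture = detect_swipe(list(history), FRAME_DT)
    if gesture == "up":
        send_ir_code("VOLUME_UP")      # hypothetical IR-blaster helper
    elif gesture == "down":
        send_ir_code("VOLUME_DOWN")
    time.sleep(FRAME_DT)
```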