NFC/RFID tagging to trigger music on/off when objects enter/exit area?

I'm working on a sound installation for a college project in which a selection of objects can be placed in a basin, each one triggering a sound on entry, so the sounds accumulate as more objects are added and users can make their own 'mix'.
I figured RFID tagging would probably be the best way to go for this, with a reader at the bottom of the basin sensing nearby objects. The main problem I have is that I don't know how to read multiple tags at the same time with one reader.
I planned to use a Raspberry Pi to power an NFC reader, connected to a MIDI shield that would separate the value of each tag into different MIDI channels.
I was just wondering whether this is an appropriate route to go down, and whether anyone has any other ideas? Any suggestions would be hugely appreciated; the technical aspects of this project are really very new to me and I'm not sure where to begin.
Thank you very much
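For illustration, a rough sketch of the Pi-side logic could look like the following. It assumes a reader that can report every tag currently in its field (reading several tags at once is the hard part: cheap 13.56 MHz readers only anti-collide a handful of tags at close range), a hypothetical read_present_tags() wrapper around whatever reader library gets used, and MIDI output via the mido library.

    import time
    import mido  # pip install mido python-rtmidi

    TAG_TO_NOTE = {"04A2B3C1": 60, "04D4E5F2": 64, "049A8B73": 67}  # example tag UIDs

    out = mido.open_output()      # default MIDI port (e.g. a USB/serial MIDI interface)
    active = set()

    while True:
        present = set(read_present_tags())    # hypothetical call into the reader library
        for uid in present - active:          # object just placed in the basin
            out.send(mido.Message("note_on", note=TAG_TO_NOTE.get(uid, 48), velocity=100))
        for uid in active - present:          # object removed from the basin
            out.send(mido.Message("note_off", note=TAG_TO_NOTE.get(uid, 48)))
        active = present
        time.sleep(0.1)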

Related

Need advice on hardware stack for Wireless Audio solution

Good day!
Problem definition:
Current implementations of Bluetooth do not allow you to simply have both good audio quality (earphones mode) and two-way audio transmission (headset mode).
Also, even if one manages to set this configuration up, which imposes huge limitations on the hardware/software used, there is no way to handle sound input from two different audio devices simultaneously.
So, technically, one cannot just play a game, communicate on Discord, and optionally listen to some music, unless bound to some USB-bundled earphones, which are usually really crappy, or really expensive. Or both.
Solution sketch:
So, I came up with the idea that one could actually build such a device using a Raspberry Pi, Arduino, or even a bare-bones-component-based stack.
The theoretical layout of connections would look something like this:
The idea is to create two "simple" devices:
One, not so portable, that would handle several analog inputs and one analog output.
One, portable, that would handle a single analog input and output, and could be used with any analog earphones.
The "requirements" for such a system would be quite simple:
The bundle has to handle data transmission over some distance, preferably up to 10 meters or more.
The "Inlet" device should be portable enough to keep in a pocket, an arm band, or something similar.
Sound quality should be at the very least on the level of the Bluetooth headphones profile, or better if possible.
If possible, it would be nice to keep the price of the solution under 500 euros, but I'm so tired of the current state of things that I might consider raising the budget...
Don't mind the yellow buttons on the Outlet device. Those are optional, and will depend on the implementation stack :)
Question:
Can anyone advice me which component-base would be a better solution to making such a tool, and why?
And maybe someone actually knows of similar systems already existing?
Personally I would prefer anything but the barebone-components-based solution, just because I'm really rusty with that area, and it requires quite the amount of tools, to handle it properly.
While using pre-built modules can save me from buying most of the hardware tools, minifying my "hardware customization" part of this solution, leaving only software part to handle (which is my main area of expertise).
But then again, if there are some experts here, that would consider other stacks non-viable - I would really appreciate to see their reasonings.
P.S. Just to be clear: If this project will prove viable - I will implement it, and share the implementation details with the communities. I am not the first one who needs such system, and unfortunately it seems that Hardware/Software vendors are not really interested in designing similar solutions...
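For the software part, here is a minimal sketch of what the transmission side could look like, under some loud assumptions: Python on both devices, plain UDP over the local Wi-Fi, the sounddevice library for capture, and a placeholder receiver address. It ignores latency, jitter buffering and compression, all of which a real build would need.

    # Hypothetical "Inlet"-side capture loop: grab PCM blocks from the analog
    # input and ship them to the "Outlet" device as UDP datagrams.
    import socket
    import sounddevice as sd  # pip install sounddevice

    OUTLET_ADDR = ("192.168.1.50", 5005)   # placeholder address of the receiving device
    SAMPLE_RATE = 48000
    BLOCK = 240                            # 5 ms blocks at 48 kHz

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def callback(indata, frames, time, status):
        # indata is int16 PCM; each block becomes one datagram (~960 bytes)
        sock.sendto(indata.tobytes(), OUTLET_ADDR)

    with sd.InputStream(samplerate=SAMPLE_RATE, channels=2, dtype="int16",
                        blocksize=BLOCK, callback=callback):
        input("Streaming... press Enter to stop\n")

The receiving side would mirror this with an output stream fed from a small ring buffer.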
I happened to find a "temporary" solution.
I came across a wireless headset that can simultaneously maintain a connection over its bundled wireless USB dongle and a Bluetooth connection to a different device, and it provides a nice way of controlling sound input/output over both connections.
This was almost pure luck, as this "feature" was not described anywhere in the specs...
The actual headset is the:
JBL Quantum 800
This does not close the question per se, as I still plan to implement this "summer project" at some point, but I believe this information might be useful to those searching for similar solutions.

Gesture Recognition for my Grandma (Kinect) Linux

I'm looking into making a project with the Kinect to allow my Grandma to control her TV without being daunted by using the remote. So, I've been looking into basic gesture recognition. The aim will be to, say, turn the volume of the TV up by sending the right IR code to the TV when the program detects that the right hand is being "waved."
The problem is, no matter where I look, I can't seem to find a Linux based tutorial which shows how to do something as a result of a gesture. One other thing to note is that I don't need to have any GUI apart from the debug window as this will slow my program down a fair bit.
Does anybody know of something that will let me constantly check for a hand gesture in a loop and, when one is detected, control something, without the need for any GUI at all, and on Linux? :/
I'm happy to go for any language but my experience revolves around Python and C.
Any help will be accepted with great appreciation.
Thanks in advance
Matt
In principle, this concept is great, but the number of features a remote offers is going to be hard to replicate with a set of gestures that an older person can memorize. They will probably be even less incentivized to learn them (learning new things sucks) if they already have a solution (the remote), even though they really love you. I'm just warning you.
I recommend you use OpenNI and NITE. Note that the current version of OpenNI (2) does not have Kinect support. You need to use OpenNI 1.5.4 and look for the SensorKinect093 driver. There should be some gesture code that works for that (googling OpenNI Gesture yields a ton of results). If you're using something that expects OpenNI 2, be warned that you may have to write some glue code.
The basic control set would be Volume +/-, Channel +/-, Power on/off. But that will be frustrating if she wants to go from Channel 03 to 50.
I don't know how low-level you want to go, but a really, REALLY simple gesture recognizer could look at horizontal and vertical swipes of the right hand exceeding a velocity threshold (averaged). Be warned: detected skeletons can get really wonky when people are sitting (that's actually a bit of what my PhD is on).
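To make the velocity-threshold idea concrete, here is a minimal sketch. It assumes you already receive right-hand (x, y) positions from OpenNI/NITE at a steady frame rate; get_hand_position() and send_ir_code() are hypothetical stand-ins for the tracker call and for something like LIRC's irsend.

    import time
    from collections import deque

    SWIPE_SPEED = 1.2      # metres per second; tune experimentally
    WINDOW = 5             # frames to average over (smooths skeleton jitter)

    history = deque(maxlen=WINDOW)

    def detect_swipe(x, y, t):
        """Return 'left'/'right'/'up'/'down' when the averaged hand velocity
        exceeds the threshold, else None."""
        history.append((x, y, t))
        if len(history) < WINDOW:
            return None
        (x0, y0, t0), (x1, y1, t1) = history[0], history[-1]
        dt = t1 - t0
        if dt <= 0:
            return None
        vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
        if abs(vx) > SWIPE_SPEED and abs(vx) > abs(vy):
            return "right" if vx > 0 else "left"
        if abs(vy) > SWIPE_SPEED:
            return "up" if vy > 0 else "down"
        return None

    # main loop, no GUI: poll the tracker and fire IR codes on swipes
    # while True:
    #     x, y = get_hand_position()      # hypothetical OpenNI/NITE wrapper
    #     gesture = detect_swipe(x, y, time.time())
    #     if gesture == "up":
    #         send_ir_code("VOLUME_UP")   # hypothetical, e.g. via LIRC's irsend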

Choosing an audio API

I'm struggling to choose between a vast number of audio programming languages and APIs. I'm very (totally) new to audio programming so please bear with me.
Software
I need to be able to:
Alter volume of different sounds before outputting them to anything (these sounds can have a variety of different origins, for example mp3s and microphone input)
phase shift sounds
superimpose sounds that I have tweaked (as per items 1 and 2)
control the output to each of 8 channels independently of one another
make this all happen on Windows7
These capabilities need to be abstracted by a graphical frontend I will probably make myself. What I want to be able to do is create 'sound sources' and move them around a 3D environment, along pre-defined trajectories and/or in relation to the movement of whoever is inside the rig. The reason I want to do pitch bending is so I can mess with red-shift stuff.
I don't want to have to construct full tracks beforehand and just play them. I want the sound that is played to depend on external input from sensors as well as what I am doing on the frontend.
As far as I know this means I can't use any existing full audio-making app.
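For concreteness, the per-source manipulations listed above amount to something like the following NumPy sketch. It is deliberately independent of any audio API; the real-time I/O and 8-channel routing would come from whichever library is chosen (PortAudio, FMOD, etc.).

    import numpy as np

    SAMPLE_RATE = 44100

    def apply_gain(signal, gain):
        """Volume change: scale the samples."""
        return signal * gain

    def phase_shift(signal, seconds):
        """Crude phase shift via a pure delay (zero-padding at the start)."""
        pad = int(seconds * SAMPLE_RATE)
        return np.concatenate([np.zeros(pad), signal])

    def mix(*signals):
        """Superimpose signals of possibly different lengths."""
        n = max(len(s) for s in signals)
        out = np.zeros(n)
        for s in signals:
            out[:len(s)] += s
        return out

    # toy example: two sine tones, one attenuated and slightly delayed
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    a = np.sin(2 * np.pi * 440 * t)
    b = apply_gain(phase_shift(np.sin(2 * np.pi * 660 * t), 0.01), 0.5)
    mixed = mix(a, b)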
The Question
I've been looking around for the API or language I should use, and I have not turned up a blank; quite the opposite, actually. I'm struggling to narrow down my search. A lot of my problem stems from the fact that I have no experience in audio programming.
So, does anyone know off-hand of an API or language that meets my criteria?
Hardware stuff and goals
(I left this until last because I'm not sure how relevant it is)
My goal is to make three rings of speakers at different heights and to have enough control over them to be able to simulate any number of 'sound sources' within the array. The idea is to have someone stand in the middle of the rig and be able to make it sound like there are lots of things moving around them. To get this working I'm planning on doing a little trig and using 8 channels of audio from my PC. The maths is pretty straightforward; it's just the rest that I need to worry about.
What I want to do next is attach a bunch of cameras to the thing and do some simple image recognition to be able to 'attach sound sources' to different objects. E.g. if someone is standing in the right place, it can be made to seem as though all red balls quack like a duck, and all orange ones moan hauntingly.
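As an illustration of the "little trig" part, one naive per-speaker gain scheme is to attenuate each of the 8 channels by its distance to the virtual source. The speaker layout and the inverse-distance law below are illustrative assumptions only; proper spatialisation would normally use something like VBAP or ambisonics.

    import math

    # placeholder layout: 8 speakers arranged as two rings of four at different heights
    SPEAKERS = [(math.cos(a), math.sin(a), z)
                for z in (0.0, 2.0)
                for a in (0, math.pi / 2, math.pi, 3 * math.pi / 2)]

    def speaker_gains(src, min_dist=0.1):
        """Return one gain per speaker for a virtual source at src = (x, y, z)."""
        dists = [math.dist(src, spk) for spk in SPEAKERS]
        gains = [1.0 / max(d, min_dist) for d in dists]   # inverse-distance falloff
        return [g / sum(gains) for g in gains]            # normalise so gains sum to 1

    print(speaker_gains((0.5, 0.0, 1.0)))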
This is not to detract from Richard Small's answer, but to comment on some of the other options out there:
If you are looking for something higher-level with which you can prototype and develop this faster, you want Max/MSP or its open source competitor Pure Data. These are designed for musicians who are technically minded, but not so much for programmers. As a result, you can build this sort of thing quickly and efficiently.
You also have some lower-level options: PortAudio can handle your audio I/O; you would have to do the sound generation, effects and so on on your own or with other libraries. Cinder and openFrameworks both provide interfaces for audio, cameras, and other stuff for "creative programming". I'm afraid I don't know if they meet your full requirements, but they are powerful and popular for this sort of thing, so I encourage you to look at them.
The two major ones these days tend to be
WWise
WWise Download Link
FMOD
FMOD Download Link
These two engines may even in fact be overkill for what you need, but I can almost guarantee that they will be capable of anything you require.

TI-99 speech effect?

I want to make a program that takes recorded speech and transforms it so it sounds like it's coming from a Texas Instruments TI-99. Do you have any good ideas or resources for how to go about that?
Most of those old speech synthesizers were built directly into a chip. Perhaps you could find a software synthesizer that sounds like the chip, but if you really want the original sound, you would have to simulate the chip itself (I don't know whether that's a simple matter; perhaps the chip internals aren't published).
I only know because I burnt out a number of the Radio Shack speech synthesizer ICs before I managed to get a SP0256-AL2 working.
If you're more of a do-it-yourself type, you need to find out which IC actually drove the speech synthesis in a TI-99, and then build the chip up on a breadboard. That's what I was trying to do back then, and I managed to get the chip to speak, but I lost patience after I fried my third chip due to a mis-wiring issue when I attempted to attach it to my PC's parallel port. I think this was the book I was using back then, but there's no cover art featured so it's hard to know for sure.
If you are familiar with how to use ROM images, there seems to be a gentleman who has managed to reverse engineer the ROM image out of an SP0256-AL2. Look here for the image and the story of how he was granted permission to do the work and distribute the results.
You could start with open source software that does something similar: Adding Robotic/Vocoder effect to your song using Audacity
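In the same spirit (not an emulation of the TI-99's actual TMS-series speech chip, just a crude approximation of that lo-fi robotic character), here is a small offline sketch using NumPy/SciPy; the file name and parameters are placeholders.

    import numpy as np
    from scipy.io import wavfile  # pip install scipy

    rate, speech = wavfile.read("speech.wav")      # mono 16-bit input assumed
    speech = speech.astype(np.float64) / 32768.0

    # crush the bandwidth, roughly like an old 8 kHz synthesis pipeline
    step = max(1, rate // 8000)
    lofi = np.repeat(speech[::step], step)[:len(speech)]

    # ring-modulate with a low carrier for the metallic buzz
    t = np.arange(len(lofi)) / rate
    robot = lofi * np.sin(2 * np.pi * 50 * t)

    wavfile.write("robot.wav", rate, (robot * 32767).astype(np.int16))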

Major game components

I am in the process of developing a game, and after two months of work (not full time, mind you), I have come to realise that our spec for the game is lacking a lot of details. I am not a professional game developer; this is only a hobby.
What I would like to receive help or advice on is this: what are the major components that you find in games that have to be developed or already exist as libraries? The objective of this question is for me to be able to specify more game aspects.
Currently, we have specified pretty much only how we would work on the visuals, completely forgetting everything about game logic (AI, entity interactions, quest logic (how do we decide whether or not a quest is completed)).
So far, I have found those points:
Physics (collision detection, actual forces, etc.)
AI (pathfinding, objectives, etc.)
Model management
Animation management
Scene management
Combat management
Inventory management
Camera (make sure not to render everything that is in the scene)
Heightmaps
Entities communication (Player with NPC, enemy, other players, etc)
Game state
Game state save system
In order to reduce the scope of this question, I'd like it if you could specifically discuss aspects related to developing an RPG type of game. I will also point out that I am using XNA to develop this game, but I have almost no grasp of all the classes available yet (pretty much only using the Game component and some classes related to it, such as GameTime, SpriteBatch and GraphicsDeviceManager), but not much more.
You have a decent list, but you are missing storage (save/load), text (text is important in RPGs: Unicode, font rendering), probably a macro system for text (something that replaces tokens like {player} with the player character's name), and most important of all, content generation tools (map editor, character editor, dialog editor), because RPGs need content (or auto-generation tools if you need them). By the way, do you have links to your work?
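The macro system mentioned here can be tiny; a sketch of the token-replacement idea (the {token} brace syntax is just one possible convention):

    import re

    def expand_macros(text, context):
        """Replace {token} occurrences with values from the context dict,
        leaving unknown tokens visible so writers can spot them."""
        return re.sub(r"\{(\w+)\}",
                      lambda m: str(context.get(m.group(1), m.group(0))),
                      text)

    line = "Welcome back, {player}! The {item} you asked about is in {town}."
    print(expand_macros(line, {"player": "Ayla", "item": "sword", "town": "Dorter"}))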
I do this exact stuff for a living so if you need more pointers perhaps I can help.
I don't know if this is any help, but I have been reading articles from http://www.gamasutra.com/ for many years.
I didn't have a perfect set of tools from the beginning either, but your list covers most of the usual trouble for RUNNING the game. Have you found out what each one of the items stands for, though? How much have you made already? "Inventory management" sounds very heavy, but some games just need a simple "array" of objects; that takes an hour to program, plus some graphical integration (if you have your GUI management done already).
How to start planning
When I develop games in my spare time, I usually get an idea because another game lacks some function or option. Then I start up whatever development tool I am currently using and try to see if I can make a prototype showing this idea. It's not always about fancy graphics; most often it's about finding out how to solve a certain problem. Green and red boxes will get you most of the way, but otherwise, use Google Images and do a quick search for prototype graphics. Remember that these images are probably copyrighted, so only use them for internal test purposes and to explain to your graphic artists what type of game/graphics you want to make.
Secondly, you'll find that you need to find/build tools to create the "maps/missions/quests" too. Today many develop their own "object script" where they can easily add new content/path to a game.
Many of the ideas we (my friends and I) have been testing started with a prototype of the interface, to see if it's possible to generate that sort of screen output first. Then we build a quick'n'dirty map/level editor that can supply us with test maps.
No game logic at this point, still figuring out if the game-engine in general is running.
My first game-algorithm problem
Back when I was in my teens I had a Commodore 64, and I wondered: how do they sort 10 numbers in order for a high score table? It took me quite a while to find a "scalable" way of doing this, but I learned a lot about programming too.
The second problem I found
How do I make a tank/cannon fire a bullet in the correct direction when I fly my helicopter around the screen?
I sat down and drew quick sketches of the actual problem, looked at the bullet lines, tried some theories of my own and found something that seemed to work (by dividing and multiplying positions, etc.). Later on, in school, I discovered that this was more or less Pythagoras. LOL!
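For what it's worth, the "dividing and multiplying positions" trick described here is essentially trigonometry; a tiny sketch of the aiming maths (the positions are made-up screen coordinates):

    import math

    def aim(cannon, target):
        dx, dy = target[0] - cannon[0], target[1] - cannon[1]
        angle = math.atan2(dy, dx)       # firing angle in radians, 0 = along +x
        distance = math.hypot(dx, dy)    # the Pythagoras part
        return angle, distance

    print(aim((100, 300), (400, 120)))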
Years and many game attempts later
I played "Dune" and later C&C, plus the then-new Warcraft (v1/v2), and I remember it started to annoy me how the lame AI worked. The pathfinding algorithms were frustrating for the player, I thought. Units moved in the direction of the target position and then found a wall, and if the way around was too complex, the object just stopped. Argh!
So I first sat with large amounts of paper, then I tried to draw certain scenarios where an "object" (tank/orc/soldier) would go from A to B and suddenly there was a "structure" (building/other object) in the pathway: what then?
I learned about A* pathfinding (after first solving it on my own in a similar way, then later reading about why it works). It's a fairly CPU-heavy way of finding a path, but I learned a lot from cracking this nut. These thoughts have helped me a lot in developing other game algorithms over time.
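Since A* comes up here, a compact sketch of the algorithm on a grid (4-neighbour moves, Manhattan heuristic); "walk straight at the target until you hit a wall" is exactly the behaviour it avoids.

    import heapq

    def astar(grid, start, goal):
        """grid: 2D list where 0 = free and 1 = wall. Returns a path or None."""
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_set = [(h(start), 0, start, [start])]
        seen = set()
        while open_set:
            _, g, cell, path = heapq.heappop(open_set)
            if cell == goal:
                return path
            if cell in seen:
                continue
            seen.add(cell)
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                              (nr, nc), path + [(nr, nc)]))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))   # routes around the wall row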
So what I am saying is: I think you'll have to think more of:
How is the game to be played?
What does the user experience look like?
Why would the user want to come back to the game?
What requirements are needed? Broadband? 19" monitor with 1280x1024?
An RPG, yes - but will it be multi-user or single?
Do we need a fast network/server setup or do we need to develop a strong AI for the NPCs?
And much more...
I am not sure this is what you asked for, but I hope you can use it somehow?
There are hundreds of components needed to make a game, from time management to audio. You'll probably need to roll your own GUI, as native OS controls are very non-gamey. You will probably also need all kinds of tools to generate your worlds, exporters to convert models and textures into something suitable for your game etc.
I would strongly recommend that you start with one of the many free or cheap game engines that are out there. Loads of them come with the source code, so you can learn how they have been put together as you go.
When you think you are ready, you can start to replace parts of the engine you are using to better suit your needs.
I agree with Robert Gould's post, especially about tools, and I'd also add:
Scripting
Memory Management
Network - especially replication of game object states and match-making
oh and don't forget Localisation - particularly for text strings
Effects and effect timers (could be magical effects, could just be stuff like being stunned.)
Character professions, skills, spells (if that kind of game).
World creation tools, to make it easy for non-programmer builders.
Think about whether or not you want PvP. If so, you need to really think about how you're going to do your combat system and any limits you want on who can attack whom.
Equipment, "treasure", values of things and how you want to do the economy.
This is an older question, but IMHO now there is a better answer: use Unity (or something akin to it). It gives you 90% of what you need to make a game up front, so you can jump in and focus directly on the part you care about, which is the gameplay. When you run aground because there's something it doesn't do out of the box, you can usually find a resource in the Asset Store for free or cheap that will save you a lot of work.
I would also add that if you're not working on your game full-time, be mindful of the complexity and time-frame of the task. If you try to integrate many different frameworks into your RPG, you can easily end up with several years' worth of work; it may be more advisable to start small and only develop the "core" of your game first, not bothering about physics, for example. You can still add it in the second version.
