How can I represent a road system in software? - modeling

I'm looking to do some traffic simulation as a side project but I'm having trouble coming up with ideas for how I should represent the road itself. I like the idea of using a series of waypoints (straight lines using lat/long coords) but it seems difficult to represent different lanes of traffic using this method. I've also been looking at some of the other traffic-simulation questions and one of them mentions using a bitmap but I'm having trouble deciding how this would allow me to easily assign real world lengths to road segments and lane widths, etc. Does anyone have any helpful hints or other ideas that would allow a car to exist at a specific point on a road and be able to switch lanes, etc?

I would start with a graph of connected nodes. A node would represent a change in the road conditions, like a crossing, a beginning or ending lane, a widening of the road itself, etc. Either you use complex connections that store all the information (lanes in both directions? how many lanes per direction? lane properties?) or you store one connection per lane. To be sure that two connections on different sides of a node belong to the same lane, you can use lane IDs on a per-node basis.
This way you have a graph you can run calculations on, and you have all the data needed to visualize the whole network.
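A minimal sketch of that idea (my own illustration; all names are hypothetical): nodes mark road-condition changes, and each lane gets its own connection, tagged with a lane ID so connections on either side of a node can be matched up.

# Minimal road-network graph: one connection per lane (illustrative sketch).
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: int
    lat: float
    lon: float                       # position of the road-condition change

@dataclass
class LaneConnection:
    start: int                       # node_id where the lane segment begins
    end: int                         # node_id where the lane segment ends
    lane_id: int                     # same lane_id on both sides of a node
    width_m: float = 3.5             # real-world lane width
    length_m: float = 0.0            # real-world segment length

@dataclass
class RoadNetwork:
    nodes: dict = field(default_factory=dict)        # node_id -> Node
    connections: list = field(default_factory=list)  # LaneConnection list

    def lanes_from(self, node_id):
        return [c for c in self.connections if c.start == node_id]

    def continuation(self, conn):
        # The connection on the far side of the end node with the same
        # lane_id, i.e. the same physical lane continuing on.
        return next((c for c in self.lanes_from(conn.end)
                     if c.lane_id == conn.lane_id), None)

net = RoadNetwork()
net.nodes[0] = Node(0, 52.52, 13.40)
net.nodes[1] = Node(1, 52.53, 13.41)
net.connections.append(LaneConnection(0, 1, lane_id=1, length_m=830.0))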

It really depends on what you want to do with your model, so it's hard to come up with "the correct" answer here.
If you want to model congestion, you might not need a network at all; you can simulate that on a circular road.
And do you really need the concept of lanes? If you do, you could model them as separate lines between nodes, or maybe it's sufficient to just store the number of lanes per road.
Anyway, what I'm getting at is that you should first think a bit deeper about what you want to achieve before you start thinking about the exact data model.
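As a toy illustration of the circular-road remark (my own example, loosely in the spirit of the Nagel-Schreckenberg cellular automaton), jams emerge with no network at all:

# Toy single-lane ring road: no network needed to see jams form.
import random

CELLS, CARS, VMAX, P_SLOW = 100, 30, 5, 0.3
pos = sorted(random.sample(range(CELLS), CARS))   # ring order; leader = i+1
vel = [0] * CARS

def step():
    for i in range(CARS):
        gap = (pos[(i + 1) % CARS] - pos[i] - 1) % CELLS  # free cells ahead
        vel[i] = min(vel[i] + 1, VMAX, gap)               # accelerate, brake
        if vel[i] > 0 and random.random() < P_SLOW:       # random dawdling
            vel[i] -= 1
    for i in range(CARS):                                 # parallel update
        pos[i] = (pos[i] + vel[i]) % CELLS

for _ in range(100):
    step()
print("mean speed:", sum(vel) / CARS)   # well below VMAX at this density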

In a previous job I was the lead developer on a driving simulator, working in particular on road network modelling. I built what I called the Logical Road Network: an abstract description of the road network used for tracking vehicles along the road.
A lane was simply a path that followed the road but was offset by a positive or negative distance from the central path. Each road was either a straight or curved section and was essentially a path of central vertices with one or more offset lanes either side. The autonomous cars could then follow a lane path.
In short, the polygons that made up the road were built around the central path along the road, e.g.
*------*------*
|\     |\     |
| \    | \    |
|  \   |  \   |
|   \  |   \  |
|    \ |    \ |
|     \|     \|
*------*------*
where * is a vertex, creating 4 polygons for this simple straight road segment.
Interpolation between two vertices along a path provided a simple way to move a vehicle in a given direction. On top of this simple path we then introduced some fuzziness for the autonomous vehicles, so that small deviations in the path emerged (creating more realistic traffic). Logically, vehicles were added to and removed from a road segment, and vehicles could inspect the segment to see other vehicles in front, behind or on a different lane. This allowed some degree of AI within each vehicle, so that it could slow down behind another vehicle or wait for oncoming traffic to pass before making a turn.
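A rough sketch of the lane-as-offset-path idea (my reconstruction, not the original simulator's code): a lane is the central polyline plus a signed lateral offset, and a vehicle's position is an interpolation along it.

# Lane = central polyline + signed lateral offset; vehicles interpolate along it.
import math

def lane_point(path, offset, s):
    """Point at arc-length s along a lane offset from the central path.
    path   : list of (x, y) central vertices
    offset : lateral distance; positive = right of the direction of travel
    s      : distance travelled along the path
    """
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if s <= seg:
            t = s / seg
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            # Unit normal pointing to the right of the travel direction.
            nx, ny = (y1 - y0) / seg, -(x1 - x0) / seg
            return x + offset * nx, y + offset * ny
        s -= seg
    raise ValueError("s is beyond the end of the path")

centre = [(0, 0), (50, 0), (100, 10)]
print(lane_point(centre, offset=1.75, s=60.0))   # approx. (60.15, 0.25)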
Not sure if this is exactly what you are after, but I hope it helps nonetheless :-)

Related

Detecting damaged car parts

I am trying to build a system that, given an image of a car, can assess its damage percentage and also find out which parts of the car are damaged.
Is there any possible way to do this using Python and OpenCV or TensorFlow?
The GitHub repositories I found that were relevant to my work are these:
https://github.com/VakhoQ/damage-car-detector/tree/master/DamageCarDetector
https://github.com/neokt/car-damage-detective
But what they provide is a qualitative output (e.g. they say the car damage is high or low); I want to print a quantitative output (a percentage of damage) along with the names of the individual parts that are damaged.
Is this possible ?
If so please help me out.
Thank you.
To extend the good answers given by @yves-daoust: it is not a trivial task, and you should not try to do it all at once with one single approach.
You should ask yourself how a human with a comparable task (say, an expert who reviews these cars at the end of a leasing contract) proceeds. Then you have to formulate requirements and also restrictions for your system.
For instance, an expert first checks for any visual occurrences and rates them; then they may check technical issues which may well be hidden from optical sensors (e.g. whether the car is drivable, by driving a round and estimating whether the engine runs smoothly, whether the steering geometry is aligned, i.e. whether the car manages to stay in line, and whether there are any minor vibrations which should not be there); and they may also apply force (manually shaking the wheels to check whether the bearings are OK).
If you define your measurement system as restricted to just a normal camera sensor, you are somewhat limited in what your system is able to deliver.
If you just want to spot cosmetic damage, i.e. classify scratches in paint and rims, I'd say a state-of-the-art machine vision application should be able to help you to some extent:
First you'd need to detect the scratches. Bear in mind that the visibility of scratches, especially in the field with changing conditions (sunlight), may make this a very hard to impossible task for a cheap sensor. E.g. to cope with reflections a system might need polarizing filters, and special-effect paints may interfere with your optical system so badly that you are not able to spot anything.
Secondly, after you detect the position and dimensions of these scratches in camera coordinates, you need to transform them into real-world coordinates to learn the real dimensions of the scratches. It would also be of great use to know the exact location of the scratch on the car (which would require a digital twin of the car, which is no longer a trivial thing to produce).
After determining the extent of the scratch and its position on the car, you need to apply a cost model. Some car parts are easily fixed: for a scratch in the bumper, you just respray the bumper. But a scratch in the C-pillar easily means a repaint of the whole back quarter if it is not to remain noticeable.
The same goes for bigger scratches and cracks: the optical detection model needs to distinguish between scratches and cracks (which is very hard to do just by looking), and then the cost model can infer the cost, e.g. whether a bumper needs just a respray or a complete replacement (because it is cracked, not just scratched). This cost model may seem easy, but bear in mind it needs to be adapted to every car you "scan": one cheap damage on one car body might be very hard to fix on a different car body. I'd say this might even be harder than spotting the initial scratches, because you'd need to obtain the construction plans and repair part lists of every vehicle you want to quote (repair handbooks and repair part lists are mostly accessible if you are a registered mechanic, but they may cost licensing fees).
You see, this is a very complex problem composed of multiple hard sub-problems. The easiest, and probably the best, way to approach it is bottom-up: start with a simple "scratch detector" which just spots scratches in paint, go from there, and you will quickly see what is possible and what is not.
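As a toy illustration of such a starting point (a naive sketch only; real scratch detection has to deal with the reflections, lighting and paint effects discussed above, and the input filename here is hypothetical), one could look for thin, elongated edge structures with OpenCV:

# Naive "scratch detector" sketch: thin, line-like edges on a paint panel.
# A real system needs controlled lighting, polarization, etc. (see above).
import math
import cv2

img = cv2.imread("car_panel.jpg")                  # hypothetical input photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (3, 3), 0)           # suppress sensor noise
edges = cv2.Canny(gray, 50, 150)                   # candidate scratch pixels

# Keep only long, thin, line-like segments as scratch candidates.
lines = cv2.HoughLinesP(edges, rho=1, theta=math.pi / 180,
                        threshold=40, minLineLength=60, maxLineGap=5)
for x1, y1, x2, y2 in (l[0] for l in (lines if lines is not None else [])):
    cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("scratch_candidates.jpg", img)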

Using FRP to model road network with jams

I am currently trying to understand arrows and FRP, and I came upon a question, which I cannot seem to map to FRP, namely how to model a road network.
I thought I could model a road network with Arrows, where each Arrow represents a road segment. It accepts streams of cars at locations and times and produces a stream of the same type, albeit with different locations and times.
So far so good. But this model does not take into account that segments may get jammed. While each segment could well respond to heavy traffic and delay cars more and more, the more congested it gets, there would be no backwater effect, i.e. the jam would not propagate backwards to other road segments.
I suspect I am applying too much OO thinking here, instead of focusing on what needs to be computed, but I cannot get it right in my head.
How can I model a road network with Arrows such that backwater effects are taken into account?
The problem is that in Arrows, and in FRP generally, the flow of information is unidirectional. Think of an FRP arrow as a piece of a digital circuit: the output of a circuit element doesn't depend on what's connected to it; it just "offers" the output to whoever is interested. This is also described visually under Primitive signal functions in the Yampa overview.
Your situation is different. The state of a road segment depends on both the next and the previous segments: cars are coming from the previous one, but if cars can't leave for the next one, they have to stay. It's just like a pipe with running water: if you close the pipe at its end, the water stops, and the information about that propagates backwards through the pipe at the speed of sound in water.
So each road segment will need two inputs: one saying how many cars the following segment can accept, and one saying how many cars are coming from the previous segment (which should always be less than or equal to the number of cars the segment can currently accept). This means that the FRP signal flow will actually be circular. For this you'll need loops, which are captured by the ArrowLoop type class. Most likely you'll have a custom binding function for road segments that internally creates the required loops. Note that there must be a time delay in the loop to prevent it from diverging, which makes sense, as it takes some time for cars to go from one segment to another.
(I'll perhaps expand the answer with a worked Arrow example if I have more time.)
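In the meantime, the backpressure idea can be sketched outside the Arrow framework as a plain discrete-time simulation (Python, my own illustration): each segment's outflow is capped by the free space the next segment had on the previous tick, which is exactly the delayed, circular dependency described above.

# Discrete-time sketch of backpressure between road segments.
# Each tick, a segment passes at most RATE cars to the next one, limited
# by the free space the next segment had on the PREVIOUS tick (the delay
# that keeps the loop well-defined).
CAPACITY, RATE, TICKS = 10, 3, 12

counts = [0, 0, 0, 0, 0]          # cars currently in segments 0..4
inflow = 3                        # cars arriving at segment 0 per tick
blocked_exit = True               # segment 4 can't discharge: a red light

for t in range(TICKS):
    free = [CAPACITY - c for c in counts]          # last tick's free space
    moves = [min(RATE, counts[i], free[i + 1]) for i in range(len(counts) - 1)]
    out = 0 if blocked_exit else min(RATE, counts[-1])
    counts[0] += min(inflow, free[0])
    for i, m in enumerate(moves):
        counts[i] -= m
        counts[i + 1] += m
    counts[-1] -= out
    print(t, counts)               # watch the jam grow backwards from segment 4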

Detecting Handedness from Device Use

Is there any body of evidence that we could reference to help determine whether a person is using a device (smartphone/tablet) with their left hand or right hand?
My hunch is that you may be able to use accelerometer data to detect a slight tilt, perhaps only while the user is manipulating some sort of on screen input.
The answer I'm looking for would state something like, "research shows that 90% of right handed users that utilize an input mechanism tilt their phone an average of 5° while inputting data, while 90% of left handed users utilizing an input mechanism have their phone tilted an average of -5°".
Having this data, one would be able to read accelerometer data and be able to make informed decisions regarding placement of on screen items that might otherwise be in the way for left handed users or right handed users.
You can definitely do this, but if it were me, I'd try a less complicated approach. First you need to recognize that no single approach will yield 100% accurate results; they will be guesses, but hopefully highly probable ones. With that said, I'd explore the simple-to-capture data points of basic touch events. You can leverage these data points and pull the x/y coordinates on touch start/end:
touchStart: triggers when the user makes contact with the touch surface and creates a touch point inside the element the event is bound to.
touchEnd: triggers when the user removes a touch point from the surface.
Here's one way to do it: it could be reasoned that if a user is left-handed, they will use their left thumb to scroll up/down on the page. Based on the way the thumb rotates, swiping up will naturally cause the arc of the swipe to bow outwards. In the case of touch events, if the touchStart X is greater than the touchEnd X, you could deduce they are left-handed. The opposite holds for a right-handed person: for a swipe up, if the touchStart X is less than the touchEnd X, you could deduce they are right-handed.
Here's one reference on getting started with touch events. Good luck!
http://www.javascriptkit.com/javatutors/touchevents.shtml
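If you log those start/end coordinates, the deduction itself is only a few lines. A sketch (in Python, over captured swipes; the same comparison can run inline in the touch handlers):

# Guess handedness from logged upward swipes by comparing start/end X.
# Each swipe is (start_x, start_y, end_x, end_y) in screen coordinates,
# with y increasing downwards as in most touch APIs.
def guess_handedness(swipes):
    left_votes = right_votes = 0
    for sx, sy, ex, ey in swipes:
        if ey >= sy:                 # not an upward swipe; ignore it
            continue
        if sx > ex:                  # arc bows left: likely left thumb
            left_votes += 1
        elif sx < ex:                # arc bows right: likely right thumb
            right_votes += 1
    if left_votes == right_votes:
        return "unknown"
    return "left" if left_votes > right_votes else "right"

print(guess_handedness([(220, 600, 180, 300), (230, 640, 200, 350)]))  # left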
There are multiple approaches and papers discussing this topic; however, most of them were written between 2012 and 2016. Doing some research myself, I came across a fairly new article that makes use of deep learning.
What sparked my interest is the fact that they do not rely on swipe direction, speed or position, but rather on the capacitive image each finger creates during a touch.
I highly recommend reading the full paper: http://huyle.de/wp-content/papercite-data/pdf/le2019investigating.pdf
What's even better, the data set, together with Python 3.6 scripts to preprocess the data and to train and test the model described in the paper, is released under the MIT license. They also provide the trained models and the software to run the models on Android.
Git repo: https://github.com/interactionlab/CapFingerId

Detecting presence (arrival/departure) with active RFID tags

Arrival is actually pretty simple: the tag gets into range of the receiver's antenna. The departure is what is causing the problems.
First some information about the setup we have.
Tags:
They work at 433 MHz. Every 1.5 seconds they transmit a "heartbeat"; on movement they go into a transmission-burst mode which lasts for as long as they are moving.
They transmit their ID, a transmission sequence number (1 to 255, repeating over and over), how long they have been in use, and input from the motion sensor, if any. We have no control over them whatsoever: they will continue doing what they do until their battery dies, and they are sealed shut.
The receiver forwards all that data, plus the signal strength of the tag, to our software. The software can work with several receivers. Currently we are using omnidirectional antennas.
How can we be sure that the tag has departed from premises?
Problems:
Sometimes two or more tags transmit their "heartbeat" at the same time and no signal is received. As the number of tags increases, these collisions happen more often; the tags mitigate this by randomly varying their heartbeat rate (by several milliseconds). The problem is that I can't rely on a tag not "checking in" for a certain period of time as a sign of departure - it could just be a run of collisions. Because of these collisions we cannot rely on every "heartbeat" being received.
The tag manufacturer advised that we use two receivers and set them up as a gate for tags to pass through. Based on the order in which tags pass the "gates" we can tell in which direction they are going. The problem with our omnidirectional antennas is that sometimes a tag's signal bounces off a building and then arrives at the receiver, so based on signal strength the tag looks farther away than it is.
Does anybody have a solution for reliably determining whether tags are arriving or leaving? We can also set up the antennas in a different way.
I wrote the software that interprets the data from the receivers, so that part can be manipulated in any way. But I'm out of ideas for how to interpret the information to get the reliability we need.
Right now the only remaining idea is to try directional antennas, but I would like to exhaust the options with the current equipment first.
Also, any literature suggestions dealing with active RFID tags are more than welcome; most of the books I've found deal with passive-tag solutions.
As a top-level statement: if you need to track items leaving your site, your RFID technology is probably the wrong one. The technology you have is better suited to positional tracking of tags within a large area, e.g. a factory floor. Notwithstanding the above, here is my take:
A good approach to active RFID is to break your area down into zones that are tied to your business processes, for example:
Warehouse
Loading bay
Packing
Entry of a tag into a zone represents the start of a new process, or perhaps the end of the process the tag is currently in. For example, moving from the warehouse to packing represents assembling a shipment, and movement into the loading bay initiates a shipment.
The crux of many RFID implementations is the installation and configuration of the RFID infrastructure to:
Map tag -> asset (which you have done)
Map tag read -> zone (and by inference asset -> zone)
Map movements between zones to steps in business processes (and therefore understand when an asset leaves the site - your goal)
There are a number of considerations: the physical characteristics of 433 MHz signals, the position of antennas, the sensitivity of antennas, and some tricks that particular vendors have. Even after an optimal site configuration, you may still need some processing tricks for the tag reads that will pour in.
Dirty data
Always keep in mind that tag-read data is dirty: RF interference (from unshielded motors, electric wiring, etc.), weather conditions and physical manipulation of tags (e.g. covering them with metal) happen all the time.
RSSIs are like stock tickers - there is a lot of random/microeconomic noise on top of broad macroeconomic trends. To interpret movement, compute a linear regression over groups of reads rather than relying on any specific read's RSSI.
If you do see a tag broadcasting with a high RSSI which then falls to medium, then low, and then disappears, you really can interpret that as the tag leaving the range of the receiver. Is that off-site? Well, you need to consider the site's layout (the zones) and the positioning of receivers within the zones.
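A sketch of the regression idea (illustrative; the window size and threshold are arbitrary): fit a line to the last N RSSI samples per tag and act on the slope, not on any single read.

# Trend detection on noisy RSSI: slope of a least-squares fit over a
# sliding window, instead of reacting to any single read.
import numpy as np

def rssi_trend(timestamps, rssis, window=10):
    """Return the dB/second slope over the last `window` reads (None if too few)."""
    if len(rssis) < window:
        return None
    t = np.asarray(timestamps[-window:], dtype=float)
    r = np.asarray(rssis[-window:], dtype=float)
    slope, _intercept = np.polyfit(t - t[0], r, 1)
    return slope

# A steadily falling trend (e.g. below -1 dB/s for several windows) is a
# much better "tag is leaving" signal than one weak read.
ts = list(range(10))
rs = [-60, -61, -59, -63, -64, -66, -65, -68, -70, -72]
print(rssi_trend(ts, rs))   # about -1.36: falling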
Trilateration
EDIT: I had incorrectly used the term 'triangulation'. That refers to determining the position of something from the angles it subtends at two or three known locations. In RFID you use distances, and so it is called 'trilateration'.
In my experience, vendors selling the tag technology you describe have server software that determines the absolute position of the tags from the received RSSIs. You should be able to locate a tag to within 1-10 m using such software. Determining whether the tag is moving off-site is then easy.
To code this yourself:
First, each tag pings away while moving. These pings hit the receivers at almost the same time and are forwarded to the server; however, the messages can arrive out of order, or interleaved with earlier and later reads from other receivers. To help correlate pings, each ping contains a sequence number. You are looking for reads of the same tag, with the same sequence number, received by three (or more) receivers. If there are more than three, pick the three with the largest RSSI.
The distance is approximated from the RSSI. The relationship is not linear and is subject to non-trivial random variation; a quick google turns up standard models for it.
Given three approximate distances from three known points (the receivers' locations), you can then resolve the approximate position of the tag using trilateration, from the three receiver positions and the three distances.
Now you have the absolute position of the tag, and a series of these positions tracks the absolute movement of the tag.
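For a single site it's easiest to work in local planar metres rather than lat/long. A least-squares trilateration sketch (illustrative; receiver positions and distances are made up):

# Trilateration from three receivers in local planar coordinates (metres).
# Subtracting the first circle equation from the other two linearises the
# problem, so the position drops out of a small linear solve.
import numpy as np

def trilaterate(receivers, distances):
    (x0, y0), (x1, y1), (x2, y2) = receivers
    d0, d1, d2 = distances
    A = np.array([[2 * (x1 - x0), 2 * (y1 - y0)],
                  [2 * (x2 - x0), 2 * (y2 - y0)]], dtype=float)
    b = np.array([d0**2 - d1**2 - x0**2 + x1**2 - y0**2 + y1**2,
                  d0**2 - d2**2 - x0**2 + x2**2 - y0**2 + y2**2], dtype=float)
    # lstsq tolerates the noisy, inconsistent distances RSSI gives you.
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

receivers = [(0.0, 0.0), (50.0, 0.0), (0.0, 40.0)]
print(trilaterate(receivers, [30.0, 36.0, 33.0]))   # approx. [21.0, 17.6]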
To make this useful, you should position receivers so that you can reliably detect tags right up to the physical site boundaries. You should then determine a 'geofence' around your site, within receiver range. I would write a business rule that states:
If the last known position of a tag was outside the geofence, and
A tag read from the tag has not been detected in (say) 10s, then
Declare the tag has left the site.
By using the trilateration and geofence, you can focus the business logic on only those tags close to going AWOL. If you fail to receive the 1.5 s ping only a few times from such a tag, it's highly likely that the tag has gone outside your receivers' range, and therefore off-site.
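The rule itself is then only a few lines (a sketch; the rectangular geofence and the 10 s threshold are placeholders):

# Off-site rule sketch: last known position outside the geofence AND no
# read for TIMEOUT_S seconds. The geofence here is a simple rectangle in
# the same local metre coordinates as the trilateration above.
TIMEOUT_S = 10.0
FENCE = (-5.0, -5.0, 55.0, 45.0)          # xmin, ymin, xmax, ymax

def inside_fence(pos):
    x, y = pos
    xmin, ymin, xmax, ymax = FENCE
    return xmin <= x <= xmax and ymin <= y <= ymax

def has_left_site(last_pos, last_read_time, now):
    return (not inside_fence(last_pos)) and (now - last_read_time > TIMEOUT_S)

print(has_left_site((60.0, 20.0), last_read_time=100.0, now=112.0))  # True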
You're already aware that tag reads can sometimes come from reflections. If you have a lot of these, then your trilateration will be pretty poor. So this method works best when there are fairly large open spaces and minimal reflectors.
Some RFID vendors have all this built into their servers - processing this by writing your own code is (clearly) non-trivial.
Zone design using wide-area receivers
Logical design of zones can help the business logic layer. For example, suppose you have two zones (A and B) with two receivers (1 and 2):
      A          B
+----------+----------+
|          |          |
|    1     |    2     |
|          |          |
+----------+----------+
If you get tag reads from tag T at receiver 1, then one at receiver 2, how do you interpret that? Did tag T move into zone B, or was it just read at the extreme range of receiver 2?
If you get a later read at 1, did the tag move back, or did it never move?
A better physical solution is:
      A          B
+----------+----------+
|          |          |
|    1     2     3    |
|          |          |
+----------+----------+
In this approach, a tag moving from A to B would get reads from the following receivers:
1 1 1 2 1 2 2 3 2 2 3 2 3 3 3 3 3
-------> time
From a programming-logic point of view, a movement from A -> B has to traverse reads 1 -> 2 -> 3 (even though there is a lot of jitter). It gets even easier to interpret when you combine this with the RSSI.
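One simple way to turn that jittery read stream into zone transitions (a sketch; the threshold is arbitrary): accept a change only after k consecutive reads agree.

# Debounce a jittery receiver-read stream into transitions: accept a
# change only after k consecutive reads from the new receiver.
def transitions(reads, k=3):
    current, candidate, streak, out = None, None, 0, []
    for r in reads:
        if r == current:
            candidate, streak = None, 0        # back on the settled receiver
        elif r == candidate:
            streak += 1
            if streak >= k:
                out.append((current, r))       # confirmed transition
                current, candidate, streak = r, None, 0
        else:
            candidate, streak = r, 1           # new candidate receiver
    return out

reads = [1, 1, 1, 2, 1, 2, 2, 3, 2, 2, 3, 2, 3, 3, 3, 3, 3]  # sequence above
print(transitions(reads))
# [(None, 1), (1, 3)]: settled at 1, then moved to 3. The flickering
# boundary receiver 2 never confirms, so you get a clean A -> B movement.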
Portal design with directional receivers
You can create quite a good portal using two directional receivers (you will need to spend some time configuring the antennas and sensitivity carefully). Mount a receiver well above the door on each side. Below is a schematic viewed from the side: R1 and R2 are the receivers (their rough read fields are shown), and on the left is a worker pushing an asset through the door:
                ----> direction of motion
 -------------------+----------------
          R1        |        R2
         /  \       |       /  \
   o    /    \             /    \
  |-++ /      \           /      \
  |\++ /       \         /        \
 ------------------------------------------
You should get a pattern of reads like this:
<nothing> 1 1 1 1 1 12 1 21 2 12 2 1 2 2 2 2 2 <nothing>
-------> time
This indicates a movement from receiver 1 to receiver 2.
"Signposts"
Savi implementations often use "signposts" to assist with location. A signpost emits a 123 kHz beam that illuminates a small area (like a doorway). The signpost also transmits a unique number identifying itself (the left door might be 1, while the right door might be 2). When a tag passes through the beam, it wakes up and rebroadcasts the number, so the reader now knows which door the tag passed through.
Watch out for any metal in the surrounding area: 123 kHz travels extremely well down rebar in concrete walls, metal fences and rail tracks. We once had tags reporting themselves hundreds of metres from a signpost due to such effects.
With this approach you can implement a portal much like you would for passive.
Simulating signposts
If you don't have the ability to use signposts, then there is a dirty hack:
Stick a passive RFID tag to your active RFID tag
Install a passive RFID reader on each doorway
Passive RFID is actually very good in restricted spaces, so this implementation can work very well. This solution may be the same cost (or cheaper) than with your active RFID vendor.
If you're clever, you can use the EPC GIAI namespace for the passive tag ID and so burn it with the active tag ID. Both active and passive tags would then be identically named.
Physical considerations
433 MHz tags have some interesting characteristics. Well-constructed receivers can read tags within about 100 m, which is a long way for RFID. In addition, 433 MHz wraps around obstacles very well, especially metal ones. We could even read tags in the boot (trunk) of a car travelling at 50 km/h - the signal escapes through the rubber seal.
When installing a reader to monitor a zone, you need to adjust its location and sensitivity very carefully to maximize the reads from tags within your zone, but also to minimize reads from outside your zone. This might be done in HW or in SW configuration (like dropping all reads below a particular RSSI).
One idea might be to move the receiver away from the area where your tags are exiting as in the layout below (R is the reader):
+-------------------------+-----------+
|        Warehouse        |   Exit    |
|                         .           |
|                         .
|            R            .   R  --->
|                         .
|                         .           |
|                         |           |
+-------------------------+-----------+
It pays to do an RF site survey and to spend enough time to properly understand how tags and readers behave in an area. Getting the physical installation right is critical.
The other thing to do is to consider physical constrictions such as corridors and doorways and treat them as choke points: map logical zones to them, and cover each constriction with a reader using a directional antenna and lowered sensitivity.
What no tag-reads actually means
If my experience of RFID has taught me anything, it is that you can get spurious reads at any time, and you need to treat everything with a degree of suspicion. For example, you might have a few seconds of missing reads from a given tag - this can mean anything:
A user accidentally putting a metal tin over the tag
A fork lift truck getting between tag and reader
An RF collision
A momentary network congestion
The battery dying or fading out (remember to check the low-battery flag in tag reads and ensure the business has a process to replace old tags).
Tag destroyed by a pallet being pushed into it
Stolen by someone wanting to resell it for scrap (not a joke - this actually happened)
Oh yeah, it may be that the tag moved off-site.
If the tag has not been heard from in, say, 5 minutes, odds are that it's off-site.
In most business processes that you would use this active tag technology for, a short delay before the system decides the tag is off-site is acceptable.
Conclusions
Site survey: spend time experimenting with readers in different locations. Walk around the site with a tag and see what reads you are actually getting. Use this to:
Logically segment your site into zones and locate receivers to most accurately position tags in zones
It's easier to determine movement between zones using several receivers; if possible, instrument physical constrictions such as doors and corridors as portals. As part of your RFID implementation, you might even want to install new walls or fences to create such constrictions. Consider a passive RFID for portals.
Beware of metal, especially large expanses of it.
You have dirty data. You need to compute linear regressions on the RSSIs to spot trends over short periods, and you need to be able to forgive a small number of missing tag reads.
Make sure that there are business processes to handle dying batteries and sudden disappearances of tags.
Above all, this problem is best solved by getting the receivers installed in the best locations and configuring them carefully, then getting the software right. Trying to solve a bad site installation with software can cause premature ageing.
Disclosure: I worked 8 years for a major active RFID vendor.
Using directional antennas sounds like it may be a more reliable option, although this obviously depends on the precise layout of your premises.
As far as using your current omnidirectional receivers, there are a couple of options I can think of:
The first, and likely easiest, would be to collect some data on the average 'check-in' times you see for on-site tags, possibly as a function of the number of on-site tags (if that number is likely to change dramatically, since your collision frequency will be related to the number of tags present). You can then analyse this data to see whether you can choose a suitable cut-off time, after which you declare that a tag is no longer present. Exactly what cut-off you choose will depend on the data you see and your willingness to accept false positives; it could also be that any acceptable cut-off time lies outside your 3-minute window (although I suspect that if that is the case, the 3-minute window may not be viable).
Another, more difficult, option (or group of options, really) would be to use more historical information about each tag: for instance, look for tags whose signal strength gradually decreases and then disappears, or tags whose check-in interval changes drastically; or use multiple receivers and look for patterns between them, such as tags which are seen by only one receiver and then disappear, or distinctive patterns of signal strength between receivers (indicating bearing) as tags go off-site.
Obviously the second option is really about looking for patterns, both over time and between receivers, and is likely to be much more labour- (and analysis-) intensive to implement. If you can capture enough good-quality data, you might be able to use machine-learning algorithms to identify the relevant patterns.
We do this every day.
The first question is: "How many tags do you have at a reader at any given time?" Collisions are rarer than you might think, but they do happen, and tag over-population can be detected easily.
Our software might well be using the same readers and tags that you are using. We set reader timeouts to determine when a tag is "away" or "offsite"; usually 30 seconds without the tag being read. Arrival, of course, is instantaneous: as soon as a tag is detected at the reader, it is flagged "onsite".
We also have the option of using multiple readers; one at a gate and another in the parking lot or in the building, for example. The gate reader has a short timeout: if a tag passes the gate reader, it is read and then quickly times out, flagging the tag as "offsite". If the tag is then read by any other reader, it is again considered "onsite".
I can post links if you think it would be helpful; otherwise you can search for RFID Track. It's an iOS app, and there are settings posted for a demo server.
Peter

What is the best approach to compute efficiently the first intersection between a viewing ray and a set of objects?

For instance:
An approach to efficiently compute the first intersection between a viewing ray and a set of three objects: one sphere, one cone and one cylinder (or other 3D primitives).
What you're looking for is a spatial partitioning scheme. There are a lot of options for dealing with this, and a lot of research has been spent on this area as well. A good read would be Christer Ericson's Real-Time Collision Detection.
One easy approach covered in that book is to define a grid, assign all objects to every cell they intersect, and walk along the grid cells intersected by the ray, front to back, testing against each object associated with the current cell. Keep in mind that an object might be associated with several grid cells, so a computed intersection point might not actually lie in the current cell, but further along the ray.
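For illustration, here is the classic 2D form of that grid walk (an Amanatides & Woo style sketch of my own; 3D adds one more axis and the same bookkeeping for z):

# 2D version of the grid walk: visit the cells a ray passes through,
# front to back, stepping one cell border at a time.
import math

def cells_along_ray(ox, oy, dx, dy, cell, nx, ny):
    """Yield (i, j) grid cells hit by the ray (ox, oy) + t * (dx, dy)."""
    i, j = int(ox // cell), int(oy // cell)
    step_i = 1 if dx > 0 else -1
    step_j = 1 if dy > 0 else -1
    # t at which the ray crosses the next vertical / horizontal border.
    t_max_x = ((i + (step_i > 0)) * cell - ox) / dx if dx else math.inf
    t_max_y = ((j + (step_j > 0)) * cell - oy) / dy if dy else math.inf
    t_dx = abs(cell / dx) if dx else math.inf   # t-width of one cell in x
    t_dy = abs(cell / dy) if dy else math.inf   # t-width of one cell in y
    while 0 <= i < nx and 0 <= j < ny:
        yield i, j
        if t_max_x < t_max_y:
            t_max_x += t_dx; i += step_i
        else:
            t_max_y += t_dy; j += step_j

print(list(cells_along_ray(0.5, 0.5, 1.0, 0.6, cell=1.0, nx=4, ny=4)))
# [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2), (3, 2)]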
The next question would be how you define that grid. Unfortunately, there's no one good answer, and you need to consider what approach might fit your scenario best.
Other partitioning schemes of interest are different tree structures, such as kd-, Oct- and BSP-trees. You could even consider using trees combined with a grid.
EDIT
As pointed out, if your set really is just these three objects, you're definitely better off just intersecting the ray with each one and picking the earliest hit. If you're looking for ray-sphere, ray-cylinder, etc. intersection tests, these are not really hard, and a quick google should supply all the math you might possibly need. :)
"computationally efficient" depends on how large the set is.
For a trivial set of three, just test each of them in turn, it's really not worth trying to optimise.
For larger sets, look at data structures which divide space (e.g. kd-trees). Whole chapters (and indeed whole books) are dedicated to this problem. My favourite reference book is An Introduction to Ray Tracing (ed. Andrew S. Glassner).
Alternatively, if I've misread your question and you're actually asking for algorithms for ray-object intersections for specific types of object, see the same book!
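To make the "just test each in turn" advice concrete, a minimal sketch (ray-sphere only; the cone and cylinder tests plug into the same loop):

# Brute force for a tiny set: intersect the ray with every object and
# keep the smallest non-negative t.
import math

def ray_sphere(o, d, centre, radius):
    """Smallest t >= 0 with |o + t*d - centre| = radius, or None."""
    ox, oy, oz = (o[i] - centre[i] for i in range(3))
    a = sum(c * c for c in d)
    b = 2 * (ox * d[0] + oy * d[1] + oz * d[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)        # nearer root first
    if t < 0:
        t = (-b + math.sqrt(disc)) / (2 * a)    # ray starts inside the sphere
    return t if t >= 0 else None

def first_hit(o, d, spheres):
    hits = [(t, s) for s in spheres
            if (t := ray_sphere(o, d, s[0], s[1])) is not None]
    return min(hits, default=None)              # (t, object) or None

spheres = [((0, 0, 10), 2.0), ((1, 0, 5), 1.0), ((0, 3, 8), 1.5)]
print(first_hit((0, 0, 0), (0, 0, 1), spheres))
# (5.0, ((1, 0, 5), 1.0)): a tangent hit on the second sphere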
Well, it depends on what you're really trying to do. If you'd like to produce a solution that is correct for almost every pixel in a simple scene, an extremely quick method is to pre-calculate "what's in front" for each pixel by pre-rendering all of the objects, each with a unique identifying color, into a background buffer using scan conversion (i.e. the z-buffer). This is sometimes referred to as an item buffer.
Using that pre-computation, you then know what will be visible for almost all rays that you'll be shooting into the scene. As a result, your ray-environment intersection problem is greatly simplified: each ray hits one specific object.
When I was doing this many years ago, I was producing real-time raytraced images of admittedly simple scenes. I haven't revisited that code in quite a while but I suspect that with modern compilers and graphics hardware, performance would be orders of magnitude better than I was seeing then.
PS: I first read about the item buffer idea when I was doing my literature search in the early 90s. I originally found it mentioned in (I believe) an ACM paper from the late 70s. Sadly, I don't have the source reference available but, in short, it's a very old idea and one that works really well on scan conversion hardware.
I assume you have a ray d = (dx, dy, dz) starting at o = (ox, oy, oz), and you are finding the parameter t such that the point of intersection is p = o + d*t. (Like this page, which describes ray-plane intersection using P2-P1 for d, P1 for o and u for t.)
The first question I would ask is "Do these objects intersect"?
If not, then you can cheat a little and check for ray collisions in order. Since you have three objects that may or may not move each frame, it pays to pre-calculate their distances from the camera (e.g. from their centre points). Test against each object in turn, in order of distance from the camera, smallest to largest. Although empty space is now the most expensive part of the render, this is more effective than testing against all three and taking the minimum value. If your image is high-res, this is especially efficient, since you amortise the cost across the number of pixels.
Otherwise, test against all three and take a minimum value...
In other situations you may want to make a hybrid of the two methods. If you can test two of the objects in order then do so (e.g. a sphere and a cube moving down a cylindrical tunnel), but test the third and take a minimum value to find the final object.
