MKMapView showsUserLocation and load custom annotations

We are using MKMapView with showsUserLocation = YES so that we see the blue dot and accuracy ring. We've also implemented mapView:didUpdateUserLocation: to capture updates. From this method we get the user's location and use it to make a web service call, the results of which end up as map annotations.
The problem
As long as showsUserLocation = YES, the method mapView:didUpdateUserLocation: is called periodically. We only need the user's location at specific times, e.g. in viewDidAppear or when the user touches a button. If we set showsUserLocation to NO after the first update, we lose the blue dot, which we'd like to keep.
Ideas
One idea is to check the MKUserLocation value received by mapView:didUpdateUserLocation: against the previous value to see whether there has been a change; if there has, check how large the change is before deciding to load fresh data.
Another idea is to use CLLocationManager and manually place a user pin on the map; the issue with this approach is how to simulate the blue dot and accuracy ring.
Does anyone know of any examples, or have thoughts on how to tackle this?
Thanks

The simplest approach is to ignore all callbacks after the first good one you get. Your first idea, checking how far the new location is from the previous one, is also a good one. Simulating the blue dot and the pulsating accuracy ring is much more complicated.
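For illustration, here is a minimal sketch of the "ignore everything after the first good fix" variant; the hasLoadedAnnotations flag, the 100 m accuracy cut-off and the loadAnnotationsForCoordinate: helper are assumptions for the example, not part of the question.

```objc
// Sketch only: handle the first usable fix, then ignore further callbacks
// while leaving showsUserLocation = YES so the blue dot stays on the map.
- (void)mapView:(MKMapView *)mapView didUpdateUserLocation:(MKUserLocation *)userLocation
{
    CLLocation *location = userLocation.location;
    // Skip callbacks that don't carry a usable fix yet (threshold is an assumption).
    if (location == nil || location.horizontalAccuracy < 0 || location.horizontalAccuracy > 100.0) {
        return;
    }
    if (self.hasLoadedAnnotations) {
        return; // already loaded annotations for the first good fix
    }
    self.hasLoadedAnnotations = YES;
    [self loadAnnotationsForCoordinate:location.coordinate]; // hypothetical web-service call
}
```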

The approach taken was the first idea. As there has been no further activity or answers, and given the age of the question, this will be the accepted answer.
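For completeness, a minimal sketch of that distance-check idea; the lastLoadLocation property and the 500 m reload threshold are illustrative choices, not from the question.

```objc
// Sketch of the distance check: only reload annotations when the user has
// moved far enough from the location used for the previous load.
static const CLLocationDistance kReloadThreshold = 500.0; // metres (assumed value)

- (void)mapView:(MKMapView *)mapView didUpdateUserLocation:(MKUserLocation *)userLocation
{
    CLLocation *newLocation = userLocation.location;
    if (newLocation == nil) {
        return;
    }
    if (self.lastLoadLocation == nil ||
        [newLocation distanceFromLocation:self.lastLoadLocation] > kReloadThreshold) {
        self.lastLoadLocation = newLocation;
        [self loadAnnotationsForCoordinate:newLocation.coordinate]; // hypothetical web-service call
    }
}
```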


How do I select walls along a particular axis, among all the other walls, in Revit using the Revit API?

I want to change the height of all walls, but the length of only those walls along a particular axis, for instance the x-axis.
Could you also tell me how I could alter the same dimensions for a house, where the walls are connected?
I see nothing in this code that would prevent it from working.
However, it seems to me that it does not make much sense.
One would seldom constrain all wall heights to be user defined to a certain value; instead, in most Revit models, walls are constrained to reach from a bottom level to a top level. Then, if the height of all walls needs to be modified, you would modify the elevation of the top level only.
The logic of the code guarantees that the wall location line will only be modified if newWallLine equals XYZ.BasisX. That may never be the case, since newWallLine is a Line object whereas XYZ.BasisX is an XYZ vector.
I would recommend researching exactly what you wish to achieve, and how to do so manually in the end user interface, before addressing the task programmatically.
In general, if a feature is not available in the Revit product manually through the user interface, then the Revit API will not provide it either.
You should therefore research the optimal workflow and best practices to address your task at hand manually through the user interface first.
To do so, please discuss and analyse it with an experienced application engineer, product usage expert, or product support.
Once you have got that part sorted out, it is time to step up into the programming environment.
I hope this clarifies things.

MeshLab: fill cracks in mesh

I'm having trouble finding a way to solve this specific problem using MeshLab.
As you can see in the figure, the mesh I'm working with presents some cracks in certain areas, and I would like to try to close them. The "close holes" option does not seem to work: since these are technically cracks rather than holes, it does not seem able to weld them.
I managed to get a good result using the "Screened Poisson Surface Reconstruction" option, but since that operation rebuilds the whole mesh topology, I would lose all the information about the mesh's UVs (and I cannot afford to lose them).
I would need some advice on the best method to weld these cracks, one that does not change the vertices that are not along them and adds only the geometry needed to close the mesh (or, ideally, welds using the existing edges along the crack).
Thanks in advance!
As answered by A.Comer in a comment to the main question, I was able to get the desired result simply by playing a bit with the parameters of the "close holes" tool.
Just for the sake of completeness, here is a copy of the comment:
The close holes option should be able to handle this. Did you try changing the max size for that filter to a much larger number? Do filters >> selection >> select border and put the number of selected faces as the max size into that filter – A.Comer

Washing machine Petri net

It is my first time doing a Petri net, and I want to model a washing machine. I have started and it looks like this so far:
Do you have any corrections or help to offer? I obviously know it's not correct, but I am a beginner and not aware of the mistakes you might see. Thanks in advance.
First, some comments on the way your net works:
there is no arrow back to the off state. So once you switch your washing machine on, will you never be able to switch it off again?
drain and dry both lead back to idle. But when idle has a token, it will either go to delicate or to T1. The conditions (the "program" chosen by the operator) don't vanish, so they would be triggered again and again.
Considering the last point, I'd suggest having a different idle place for the end of the program to avoid this cycling. If you have to pass through the same state several times but take different actions depending on the progress, you have to work with more tokens.
Some remarks about the net's form:
you don't need to put the 1 on every arc. You could make this more readable by leaving the 1 out and indicating a number on an arc only when more than one token is needed.
usually, the transitions are not aligned with the arcs (although nothing forbids it) but rather drawn perpendicular to the flow (here, horizontal)
In principle, "places" (nodes) represent states or resources, and "transitions" (rectangles) represent events that change the state (or actions that consume resources). Your naming convention should reflect this better.
Apparently you're missing some condition to stop the process. As it stands, once you start, your washing will continue in an endless loop.
I think it would be nice to leave a transition unshaded or unfilled if it is not enabled. Personally, I fill it green when it is enabled.
If you want someone to check whether you modeled the logic properly in your Petri net, it would help to include a description of your system's logic in prose.

Unreal Development Kit Portals/Teleporters

I'm trying to use UTTeleporterCustomMeshes in UDK in a way that is similar to the behavior in the game Portal (the actual portal itself, not the portal gun).
That is:
You can shoot through them as well as walk through them (DONE)
It appears as if you can see through the portal as if it were a window (DONE)
You can stand halfway through a portal and be on both sides simultaneously
Shadows and light will also carry through the teleporter
My problem is that I can't seem to find any detailed documentation for UTTeleporters.
Do the teleporters have some built in functionality for this kind of stuff or do I need to implement it all myself?
I.e., do I need to create a custom camera to capture the scene to be rendered onto my teleporter, and create my own custom teleporter script?
I'm just after SOME sort of direction or ideas as to how I could achieve this.
Cheers
EDIT:
I've since managed to get steps 1 and 2 working (UTPortal did the trick), though 3 and 4 still remain a challenge.
My first mistake was trying to use UTTeleporterCustomMeshes.
No matter where I went through it, it would always spit me out in the middle of the target teleporter. Plus I'd have to do something like @Sebastian said with the TextureRenderTarget.
I still have a long way to go before it starts to look seamless!!
Then, onto stage 2!
As far as I know, Unreal does not support that type of behavior.
You can however build it yourself.
I would look at the TextureRenderTarget classes for rendering the material on the portal.
Standing in the portal with one foot on each "side" will be... very tricky to do, but shooting and walking through them should be simple enough by extending the touch event in Actor.

Getting data from objects that collide

I'm currently designing a game using Cocos2d. There's no code yet, as I'm still developing my ideas. But I've run across a question I can't answer and want to know whether I'm just missing something. Here's what I'm currently thinking:
I am "dropping" multiple blocks from the top of the screen and they move down the screen in random directions. They will eventually settle at the bottom of the screen and stack up one on top of the other. Eventually, while falling, some blocks are going to collide with others. When two blocks collide I want to test whether certain characteristics of each block are equal (e.g. size, color, orientation, etc.). Each block is its own object, will handle its own movement and collision detection, and will have accessor methods for size, color, orientation, etc.
Here's my question:
Am I correct in thinking that each block is a separate unit in itself and doesn't know anything about the other blocks? Block A, for instance, collides with Block B and only knows that it collided with something, but doesn't know it was another block? If this is so, then how do I do a proper comparison? How do I tell which block has collided with which, get access to each block's data, and where do I do the comparison? In the layer?
I'd love to be pointed in a decent direction here. I'm not really sure whether what I want to do is even doable. Any suggestions?
You could use one of the physics engines that usually come along with cocos2d, either Chipmunk or Box2D. The physics engine will take care of collisions for you, and if you implement collision callbacks you can know when two objects hit each other. You can then check the characteristics of each object and react accordingly. This tutorial on Chipmunk and cocos2d integration might be helpful.
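If you'd rather not pull in a physics engine, here is a rough sketch of doing the pairwise check yourself in the layer; the Block and GameLayer classes and their properties are hypothetical, invented only to illustrate where the comparison can live. The point is the same one a physics-engine collision callback gives you: whoever sees both colliding objects (here the layer, there the callback) is where you compare their data.

```objc
#import "cocos2d.h"

// Hypothetical Block class: a CCSprite subclass exposing the attributes to compare.
@interface Block : CCSprite
@property (nonatomic, assign) NSInteger colorId;
@end

@implementation Block
@end

// Hypothetical layer that owns all the blocks currently on screen.
@interface GameLayer : CCLayer
@property (nonatomic, strong) NSMutableArray *blocks; // Block instances
@end

@implementation GameLayer

// Scheduled each frame via [self scheduleUpdate] in init. The layer can see
// *both* parties in a collision and compare their properties directly; the
// blocks themselves stay ignorant of one another.
- (void)update:(ccTime)delta
{
    for (NSUInteger i = 0; i < self.blocks.count; i++) {
        for (NSUInteger j = i + 1; j < self.blocks.count; j++) {
            Block *a = self.blocks[i];
            Block *b = self.blocks[j];
            if (CGRectIntersectsRect([a boundingBox], [b boundingBox])) {
                if (a.colorId == b.colorId) {
                    // React however the game requires; this log is just a placeholder.
                    CCLOG(@"Blocks %@ and %@ match", a, b);
                }
            }
        }
    }
}

@end
```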
