I'm working on a project using Sencha Touch and Node.js.
What I am trying to do now is get the device's moving speed. From what I have seen in Sencha Docs I'm supposed to use Ext.device.Geolocation in order to use the device's geolocation services instead of the browser's.
Here's the way I'm doing it...
Ext.device.Geolocation.getCurrentPosition({
    success: function(position) {
        alert(position.coords.speed);
        alert(JSON.stringify(position));
    },
    failure: function() {
        console.log('something went wrong!');
    }
});
The position variable in there gets this value when running from my iPhone:
{ busId: '186', position:'{"speed":null,"accuracy":81.15007979037921,
"altitudeAccuracy":10,"altitude":9.457816123962402,"longitude":-54.950113117326524,
"heading":null,"latitude":-34.93604986481545}'}
The speed property there comes back null, and I can't figure out why.
An even stranger thing happens, though: if I access my app from an Android device, the position variable is empty. All I get is {}.
The app is running on localhost on my PC, and when I say I access it through my phone, I mean I open it via my PC's IP. I don't know whether that is supposed to work or not, though.
I have added the Cordova API, or tried to, in my project's folder, like this...
And included in my index.html like this...
<script type="text/javascript" src="./cordova-ios/CordovaLib/cordova.js"></script>
I have zero experience with Cordova, so I have no idea what I'm doing here. What am I doing wrong?
According to the Sencha Touch 2 documentation, if the speed value returns null then the feature is unsupported on the device (http://docs.sencha.com/touch/2.3.1/#!/api/Ext.util.Geolocation). If it is supported then it should return at least 0.
If you don't plan on packaging the application for Android or iOS, then there is no point in using Cordova. What I would suggest is writing your own getSpeed function which checks whether the speed is null or 0 and, if so, uses a fallback function.
For the fallback function, you'll need to be able to calculate the distance between two sets of GPS coordinates. There is a solution for that here: Calculate distance between 2 GPS coordinates
You will also need to track the time between the two GPS readings. The closer together you take those readings, the more accurate your speed calculation will be. For instance, a 1-minute interval between readings is more accurate than a 10-minute interval, since a vehicle or person could have stopped several times within those 10 minutes.
I would also make sure that you've set allowHighAccuracy to true on your Geolocation object. Also, to avoid cached GPS data, set the maximum age to 0 so that you retrieve fresh data every time.
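A minimal sketch of such a fallback, assuming the plain browser navigator.geolocation API rather than the Sencha wrapper (getDistanceInMeters, getSpeed and lastReading are just illustrative names, not part of any library):

    // Haversine distance between two coordinate pairs, in meters.
    function getDistanceInMeters(lat1, lon1, lat2, lon2) {
        var R = 6371000; // approximate Earth radius in meters
        var toRad = function(deg) { return deg * Math.PI / 180; };
        var dLat = toRad(lat2 - lat1);
        var dLon = toRad(lon2 - lon1);
        var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
                Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
                Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    var lastReading = null;

    // Returns speed in m/s: the native value if the device provides one,
    // otherwise distance/time between the previous and current readings.
    function getSpeed(position) {
        var speed = position.coords.speed;
        if (speed !== null && speed !== 0) {
            return speed;
        }
        var fallback = null;
        if (lastReading) {
            var meters = getDistanceInMeters(
                lastReading.coords.latitude, lastReading.coords.longitude,
                position.coords.latitude, position.coords.longitude);
            var seconds = (position.timestamp - lastReading.timestamp) / 1000;
            if (seconds > 0) {
                fallback = meters / seconds;
            }
        }
        lastReading = position;
        return fallback;
    }

    // High accuracy on, maximum age 0 so no cached fix is ever reused.
    navigator.geolocation.watchPosition(function(position) {
        console.log('speed (m/s):', getSpeed(position));
    }, function(err) {
        console.log('geolocation error', err);
    }, { enableHighAccuracy: true, maximumAge: 0 });

Using watchPosition rather than repeated getCurrentPosition calls keeps the readings close together, which, as noted above, makes the derived speed more accurate.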
Related
I am trying to build an Android app that interfaces with the ESP32 using BLE. I am using the RxBluetoothKotlin library from Vincent Masselis for the Android side. For the ESP32 side, I am using the default Kolban libraries that are included in the Arduino IDE. My phone is a OnePlus 5T and my ESP32 is a MH ET Live ESP32DevKIT. My Android app can be found here, and my ESP32 program here.
The whole system works pretty much perfectly for me in terms of pure functionality. That is to say, every button does what it's supposed to do, and I get the exact behaviour I had expected to get. However, the communication itself is very slow. Around 200 bytes/second. My test button in the Android app requests a bunch of text data from the ESP32, and displays this in a dialog. It also lists a number which represents the time between request and reception in milliseconds. Using this, I get around 2 seconds for 440 bytes of data. When I send less data, the time decreases approximately linearly with data size. 40 bytes of data will take around 200ms, and 20 bytes or under typically takes less than 100ms.
This seems rather slow to me. From what I understand, I should be able to at least get a few kilobytes per second. I have tried to check the speed using nRF Connect, but I get the same 2 seconds timespan for my data transfer. This suggests that the problem is not in my app, since I also have it with a completely different app. I also put the code in my main loop inside of callbacks instead (which I probably should have done in the first place), but this didn't change things at all. I have tried taking the microcontroller and my phone to a few different locations, hoping to eliminate interference. I have tried to mess with BLEDevice::setPower and BLEDevice::setMTU, as well as setting RxBluetoothGatt.requestMtu(500) on the Android side. Everything so far seems to have had little to no effect. The only thing that did anything, was adding the line "pServer->updatePeerMTU(0,500);" in my loop during the connection phase. This caused the first 23 bytes of data to be repeated whenever I pressed the test button in my app, and made the data transfer take about 3 seconds. If I'm lucky, I can get maybe a bit under 1.8 seconds for 440 bytes, but this is a very small change when I'm expecting an order of magnitude of difference, and might even be down to pure chance rather than anything I did.
Does anyone have an idea of how to increase my transfer speed?
The data transmission speed is mainly influenced by the Bluetooth LE connection interval (between 7.5 ms and 4 seconds) and is negotiated between the master (central unit) and the peripheral device. The master establishes a connection with a parameter set and the peripheral can propose to change this parameter set. In the end, however, the central unit decides which parameter set is to be used.
But the Bluetooth connection interval cannot be changed directly by an Android application, which normally acts in the central role. Instead, the application can request a connection priority, which is known to influence the connection interval.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://api.github.com/repos/vuejs/vue/issues');
xhr.send();
With the above code, I can receive the top 30 issues of the vue project. But if I want to get the top 30 issues whose issue number is less than 8000, how can I do that?
In the GitHub v3 API docs, there is only a feature that allows you to get issues since a given point in time.
One way using API v3 would be to traverse the issues and find the ones you want. The call to the Issues API returns issues in descending order of creation date, which means you just need to walk through them until you reach the ones with an issue number lower than 8000.
In the particular case of vuejs/vue, you can increase the number of issues returned per page to 100 and then find the issues numbered below 8000 on the second page:
https://api.github.com/repos/vuejs/vue/issues?per_page=100&page=2
I feel this is a better option than using the issue Search API (v3), since you do not have to deal with the very low rate limit of the GitHub Search APIs.
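For instance, a small sketch in the same XMLHttpRequest style as the question; the page number and the 8000 cutoff are just the values discussed above, and you would fetch further pages if the second one does not contain enough matches:

    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://api.github.com/repos/vuejs/vue/issues?per_page=100&page=2');
    xhr.onload = function() {
        var issues = JSON.parse(xhr.responseText);
        // Keep only issues numbered below 8000, then take the first 30.
        var wanted = issues.filter(function(issue) {
            return issue.number < 8000;
        }).slice(0, 30);
        console.log(wanted);
    };
    xhr.send();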
So I'm using this Node module to connect to Chef from my API.
https://github.com/normanjoyner/chef-api
It contains a method called "partialSearch" which fetches selected attributes for all nodes that match a given query. The problem I have is that one of our environments has 1386 nodes attached to it, but the module seems to return a maximum of 1000.
There does not seem to be any way to "offset" the results. The module works pretty well otherwise, and it's a shame this feature is not implemented, since its absence really limits the module's usefulness.
Has anyone bumped into a similar issue with this module and can advise how to work around it?
Here is an extract of my code:
chef.config(SetOptions(environment));

console.log("About to search for any servers ...");

chef.partialSearch('node',
    {
        q: "name:*"
    },
    {
        name: ['name'],
        ipaddress: ['ipaddress'],
        chef_environment: ['chef_environment'],
        ip6address: ['ip6address'],
        run_list: ['run_list'],
        chef_client: ['chef_client'],
        ohai_time: ['ohai_time']
    },
    function(err, chefRes) {
        // ... (callback body omitted here)
    }
);
Regards!
The maximum is 1000 results per page, but you can still request the pages in order. The Chef API doesn't have a formal cursor system for pagination, so it's just separate requests with a different start value. This can sometimes lead to minor desync (an item at the end of one page might shift in ordering and also show up at the start of the next page), so make sure you handle that. That said, the fancy API in the client library you linked doesn't seem to expose that option, so you'll have to add it or otherwise work around the problem. Check out https://github.com/sethvargo/chef-api/blob/master/lib/chef-api/resources/partial_search.rb#L34 for a Ruby implementation that does handle it.
We have run into similar issues with Chef libraries. One workaround you might find useful is to use some node attribute to segment all of your nodes into groups of fewer than 1000.
If you have no such natural segmentation attribute already, a simple implementation would be to create a new attribute called segment and, during your Chef runs, set its value randomly to a number between 1 and 5.
Now you can perform 5 queries (each query searching a single segment) and you should find all your nodes; if the randomness works out, each group will contain roughly 277 nodes (1386/5).
As your node population grows, you'll need to keep increasing the number of segments to ensure each segment stays under 1000 nodes.
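A rough sketch of that idea, reusing the partialSearch call from the question; the segment attribute name, the choice of 5 segments, and the chefRes.rows response shape are assumptions based on the suggestion above and the usual Chef search response format:

    var SEGMENT_COUNT = 5; // assumed number of segments, per the suggestion above
    var allNodes = [];
    var pending = SEGMENT_COUNT;

    for (var i = 1; i <= SEGMENT_COUNT; i++) {
        // One query per segment; each should match well under 1000 nodes.
        chef.partialSearch('node',
            { q: 'segment:' + i },
            {
                name: ['name'],
                ipaddress: ['ipaddress'],
                chef_environment: ['chef_environment']
            },
            function(err, chefRes) {
                if (err) {
                    console.error(err);
                } else {
                    allNodes = allNodes.concat(chefRes.rows); // assumes rows holds the matches
                }
                if (--pending === 0) {
                    console.log('Found ' + allNodes.length + ' nodes in total');
                }
            }
        );
    }

If a segment still grows past 1000 nodes, bumping SEGMENT_COUNT (and re-randomizing the attribute on the next Chef run) keeps each individual query under the cap.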
I have Windows CE 6 running an Atom N270 with an Intel 945GSE chipset. I wrote a small test application for Direct3D Mobile and have been experiencing some strange behaviour. I can only call Draw*Primitive once per frame. Calling multiple times either results in a white screen as though Present failed (even though no error is given) or only the first call seems to be processed.
With the message handling omitted the body of the render loop is as follows:
HandleD3DMError(pD3DMobileDevice->Clear(0, 0, D3DMCLEAR_TARGET | D3DMCLEAR_ZBUFFER, 0xA50AB0F, 1.0f, 0), _T("Clear"));

if(HandleD3DMError(pD3DMobileDevice->BeginScene(), _T("BeginScene")))
{
    HandleD3DMError(pD3DMobileDevice->SetTexture(0, pTexture), _T("SetTexture"));
    HandleD3DMError(pD3DMobileDevice->SetStreamSource(0, m_pPPVVB[0], sizeof(PrimitiveVertex)), _T("SetStreamSource Failed"));
    HandleD3DMError(pD3DMobileDevice->SetIndices(pIndexBuffer), _T("SetIndices failed"));
    HandleD3DMError(pD3DMobileDevice->DrawIndexedPrimitive(D3DMPT_TRIANGLELIST, 0, 0, NUM_TRIS*3, 0, NUM_TRIS/2), _T("Draw Primitive 1"));
    //HandleD3DMError(pD3DMobileDevice->DrawIndexedPrimitive(D3DMPT_TRIANGLELIST, 0, 0, NUM_TRIS*3, NUM_TRIS/2 * 3, NUM_TRIS/2), _T("Draw Primitive 2"));
    HandleD3DMError(pD3DMobileDevice->SetStreamSource(0, 0, 0), _T("SetStreamSource Null Failed"));
    HandleD3DMError(pD3DMobileDevice->SetIndices(0), _T("SetIndices Null failed"));
    HandleD3DMError(pD3DMobileDevice->EndScene(), _T("EndScene"));
}

HandleD3DMError(pD3DMobileDevice->Present(0, 0, 0, 0), _T("Present Failed"));
You can swap which of the two DrawIndexedPrimitive() lines is commented out, and each renders its four triangles, which is correct. However, when both are enabled, nothing is rendered. The HandleD3DMError() function displays a message box when an error occurs; this is done all through initialisation too. No errors are displayed at any time.
I have tried drawing different primitive types and drawing non-indexed vertex buffers with no success. I am able to draw 10,000 triangles using a single buffer but trying to use multiple buffers fails (I assume it is related to the multiple Draw calls issue).
The documentation on MSDN does not mention any limitation on Draw calls. They even mention cases where you would need to make multiple Draw calls. The official samples I've looked at only ever call Draw*() once per frame too.
If I try and BeginScene() and EndScene() multiple times in the one frame nothing is rendered, not even the cleared colour.
I can provide all of the source if needed.
I appreciate any help anyone can give me.
Cheers
I'm developing a pretty simple app (an XNA game) for the Xbox. The start page contains tiles with moving GIF-style images. Those GIFs are actually all PNG frames, which get loaded once by every tile and put in an array. Then, using a defined delay, these frames are played back (using a counter which increases every time the delay passes).
This all works well; however, I noticed a small lag every x seconds in the movement of the GIF images. I then started to add some benchmarking:
http://gyazo.com/f5fe0da3ff81bd45c0c52d963feb91d8
As you can see, the FPS is pretty low for such a simple program (this is in debug; when running the app on the Xbox itself, I get an average of 62 fps).
Two important settings:
Graphics.SynchronizeWithVerticalRetrace = false;
IsFixedTimeStep = false;
Changing IsFixedTimeStep to true increases the lag. The settings tile has wheels which rotate, and you can see the wheels jump back a little every x seconds. The same goes for SynchronizeWithVerticalRetrace: enabling it also increases the lag.
I noticed a connection between the lag and the moments the garbage collector kicks in: every time it runs, there is a lag...
Don't mind the MAX HMU (heap memory usage), as it reflects the amount at startup; the average is more realistic.
Here is another screenshot from the performance monitor; however, I don't understand much of this tool, as it's the first time I'm using it. Hope it helps:
http://gyazo.com/f70a3d400657ac61e6e9f2caaaf17587
After a little research I found the culprit.
I have custom components that all derive from GameComponent and get added to the Components list of the main Game class.
This was one of two major problems: it caused everything to be updated, even components that didn't need an update. (The Draw method was the only one that kept the page state in mind and only drew when needed.)
I fixed this by using different "screens" (or pages, as I called them), which are now the only components that derive from GameComponent.
Then I only update the page which is active, and the custom components on that page get updated through it. Problem fixed.
The second big problem is the following:
I made a class which helps me position things on the screen relatively, with percentages and the like: parent containers, aligns & v-aligns, etc.
That class had properties for sizes & vectors, but instead of saving the calculated value in a backing field, I recalculated it every time a property was accessed. Calculating complex things like that involves following references (to parent & child containers, for example), which made it very hard on the CLR because it had a lot of work to do.
I have now rebuilt the whole positioning class into a fully functional, optimized class with flags that trigger recalculation only when necessary, and instead of drops of 20 fps, I now get an average of 170+ fps!