Setting the region on an MKMapView occasionally results in the span being doubled. The bug seems to appear early in the map initialization phase. Although it's been reported elsewhere, I wasn't able to find a decent existing workaround, so I'm posting my fix here. It relies on the fact that the regionThatFits: method also reproduces the bug. I'm working with iPhone OS 3.1.2, but the bug was reported as far back as the 3.0 beta. This code lives in the UIViewController that contains your MKMapView:
- (BOOL)doubleSpanBugDetected:(MKCoordinateRegion)region fittedRegion:(MKCoordinateRegion)fitted
{
    float latRatio = fitted.span.latitudeDelta / region.span.latitudeDelta;
    float lonRatio = fitted.span.longitudeDelta / region.span.longitudeDelta;
    BOOL latDoubled = (latRatio > 1.8 && latRatio < 2.2); // within 10% of x2
    BOOL lonDoubled = (lonRatio > 1.8 && lonRatio < 2.2); // within 10% of x2
    return latDoubled && lonDoubled;
}

- (void)setRegion:(MKCoordinateRegion)region animated:(BOOL)animated
{
    // fixes setRegion span doubling bug
    // see: http://osmorphis.blogspot.com/2009/12/mapkit-span-doubling-bug.html
    // see: http://www.iphonedevsdk.com/forum/iphone-sdk-development/15810-mkmapview-needs-time-think.html
    MKCoordinateRegion fitted = [self.mapView regionThatFits:region];
    if ([self doubleSpanBugDetected:region fittedRegion:fitted]) {
        MKCoordinateSpan span = MKCoordinateSpanMake(fitted.span.latitudeDelta / 2.0, fitted.span.longitudeDelta / 2.0);
        MKCoordinateRegion regionHack = MKCoordinateRegionMake(fitted.center, span);
        [self.mapView setRegion:regionHack animated:animated];
    } else {
        [self.mapView setRegion:fitted animated:animated];
    }
}
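With both methods in place, region changes go through the view controller's wrapper rather than the map view itself, so the doubled-span check is always applied; the coordinates below are just illustrative:

// Example call site inside the view controller (illustrative coordinates):
MKCoordinateRegion region = MKCoordinateRegionMake(
    CLLocationCoordinate2DMake(37.33, -122.03),
    MKCoordinateSpanMake(0.05, 0.05));
[self setRegion:region animated:YES];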
I've trained the HuggingFace RoBERTa model on my data (it's a very particular usage, hence the small model/vocabulary!) and tested it successfully in Python. I exported the traced model to LibTorch for iOS, but the prediction results on device do not match those in Python (they give different argmax token indices). My conversion script:
# torch = 1.5.0
# transformers = 3.2.0
config = RobertaConfig(
    vocab_size=858,
    max_position_embeddings=258,
    num_attention_heads=6,
    num_hidden_layers=4,
    type_vocab_size=1,
    torchscript=True,
)
model = RobertaForMaskedLM(config=config).from_pretrained('./trained_RoBERTa')
model.cpu()
model.eval()
example_input = torch.LongTensor(1, 256).random_(0, 857).cpu()
traced_model = torch.jit.trace(model, example_input)
traced_model.save('./exports/trained_RoBERTa.pt')
I have had problems in the past with another (vision) model that I trained in Python+GPU and converted to LibTorch for iOS, which were solved by adding map_location={'cuda:0': 'cpu'} to the torch.load() call in my conversion script. So I'm wondering: 1) does that make sense as a possible explanation in this situation, and 2) how can I add the map_location option when loading with the .from_pretrained() syntax?
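For what it's worth, one way map_location could be combined with from_pretrained() is via its state_dict keyword argument; the following is a sketch under that assumption (the checkpoint file name is the library's default), not a confirmed fix:

# Sketch: load the checkpoint with an explicit map_location, then pass the
# weights to from_pretrained() through its state_dict keyword argument.
state_dict = torch.load('./trained_RoBERTa/pytorch_model.bin',
                        map_location={'cuda:0': 'cpu'})
model = RobertaForMaskedLM.from_pretrained('./trained_RoBERTa',
                                           state_dict=state_dict,
                                           torchscript=True)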
Just in case my Obj-C++ handling of the prediction results is to blame, here's the Obj-C++ code run on device:
- (NSArray<NSArray<NSNumber*>*>*)predictText:(NSArray<NSNumber*>*)tokenIDs {
    try {
        long count = tokenIDs.count;
        long* buffer = new long[count];
        for (int i = 0; i < count; i++) {
            buffer[i] = tokenIDs[i].intValue;
        }
        at::Tensor tensor = torch::from_blob(buffer, {1, (int64_t)count}, at::kLong);
        torch::autograd::AutoGradMode guard(false);
        at::AutoNonVariableTypeMode non_var_type_mode(true);
        auto outputTuple = _impl.forward({tensor}).toTuple();
        auto outputTensor = outputTuple->elements()[0].toTensor();
        auto sizes = outputTensor.sizes();
        // len will be tokens * vocab size -- sizes[1] * sizes[2] (sizes[0] is batch_size = 1)
        auto positions = sizes[1];
        auto tokens = sizes[2];
        float* floatBuffer = outputTensor.data_ptr<float>();
        if (!floatBuffer) {
            return nil;
        }
        // MARK: This is probably a slow way to create this 2D NSArray
        NSMutableArray* results = [[NSMutableArray alloc] initWithCapacity:positions];
        for (int i = 0; i < positions; i++) {
            NSMutableArray* weights = [[NSMutableArray alloc] initWithCapacity:tokens];
            for (int j = 0; j < tokens; j++) {
                [weights addObject:@(floatBuffer[i*positions + j])];
            }
            [results addObject:weights];
        }
        return [results copy];
    } catch (const std::exception& exception) {
        NSLog(@"%s", exception.what());
    }
    return nil;
}
Note that my init code in iOS does call eval() on the TorchScript model.
UPDATE: One observation: the way I've attempted to use my config when loading the trained model above results in the torchscript flag not being set. I assume it's ignoring my config entirely and taking the config from the pretrained file instead, so I've moved the flag to from_pretrained('./trained_RoBERTa', torchscript=True), as outlined in the docs. Same problem with the output on iOS, mind you...
UPDATE 2: I thought I'd try testing the traced model in Python. Not sure it's expected that this should work, but the output does match the same test in the original model:
traced_test = traced_model(input)
pred = torch.argmax(traced_test[0], dim=2).squeeze(0)
pred_str = tokenizer.decode(pred[1:-1].tolist())
print(pred_str)
Which makes me think there's something going on with the iOS Obj-C++ execution. The code that loads the traced model does call .eval() on it, by the way (I realize that comes up as a possible explanation for differing outputs):
- (nullable instancetype)initWithFileAtPath:(NSString*)filePath {
    self = [super init];
    if (self) {
        try {
            auto qengines = at::globalContext().supportedQEngines();
            if (std::find(qengines.begin(), qengines.end(), at::QEngine::QNNPACK) != qengines.end()) {
                at::globalContext().setQEngine(at::QEngine::QNNPACK);
            }
            _impl = torch::jit::load(filePath.UTF8String);
            _impl.eval();
        } catch (const std::exception& exception) {
            NSLog(@"%s", exception.what());
            return nil;
        }
    }
    return self;
}
UPDATE 3: Uhhhmmm... This is definitely a face-palm moment (following a wasted weekend)... I decided to return a flat NSArray from Obj-C and do the 2D array reshape in Swift, and aside from a shift of one token (I think it's just the [CLS]), the output is now correct. I guess my Obj-C really is that rusty. Sadly, I still don't see the issue, but it's working now so I'm going to surrender.
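For the record, the likely culprit in predictText: above is the flat-index arithmetic: the logits tensor is laid out as [batch, positions, tokens], so the row stride in the flat buffer is tokens, not positions. A one-line sketch of the fix (untested):

// Row i starts at i * tokens in the flat buffer, so index with the
// vocab-size stride rather than the position count:
[weights addObject:@(floatBuffer[i * tokens + j])];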
I have a problem with drawing meshes in Vulkan.
I want to bind a UniformBufferObject of the following form to an object.
void mainLoop() {
    ...
    vulkanDrawing.Draw();
    plane.UpdateUniformBuffers();
    ...
}
To get the current image, I created a method SetCurrentImage(uint32_t currentImage), which is called from the VulkanDrawing::Draw() method. This current image is then used in UpdateUniformBuffers().
I only get a black screen when I run the application, although I expect to see a square. In the past, I called the UpdateUniformBuffers method with an imageIndex parameter directly in VulkanDrawing::Draw().
I think it could be a problem with the fences or semaphores, but I don't know how to fix it. Am I perhaps using the wrong architecture?
I have attached the important methods:
void CVulkanDrawing::Draw()
{
    vkWaitForFences(m_LogicalDevice.getDevice(), 1, &inFlightFences[currentFrame], VK_TRUE, std::numeric_limits<uint64_t>::max());
    vkResetFences(m_LogicalDevice.getDevice(), 1, &inFlightFences[currentFrame]);
    uint32_t imageIndex;
    vkAcquireNextImageKHR(m_LogicalDevice.getDevice(), m_Presentation.GetSwapChain(), std::numeric_limits<uint64_t>::max(), imageAvailableSemaphores[currentFrame], VK_NULL_HANDLE, &imageIndex);
    for (unsigned int i = 0; i < m_VulkanMesh.size(); i++)
    {
        //m_VulkanMesh.at(i).UpdateUniformBuffers(imageIndex);
        m_VulkanMesh.at(i).SetCurrentImage(imageIndex);
    }
    VkSubmitInfo submitInfo = {};
    ...
    currentFrame = (currentFrame + 1) % MAX_FRAMES_IN_FLIGHT;
}
void CVulkanMesh::UpdateUniformBuffers()
{
    ...
    vkMapMemory(m_LogicalDevice.getDevice(), uniformBuffersMemory[this->m_CurrentImage], 0, sizeof(ubo), 0, &data);
    memcpy(data, &ubo, sizeof(ubo));
    vkUnmapMemory(m_LogicalDevice.getDevice(), uniformBuffersMemory[this->m_CurrentImage]);
}

void CVulkanMesh::SetCurrentImage(uint32_t currentImage)
{
    this->m_CurrentImage = currentImage;
}
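For reference, the earlier working flow mentioned above (and still visible as the commented-out call in Draw()) updated the uniforms from inside Draw, before the submit, roughly like this:

// Reconstructed from the commented-out call in Draw(): write each mesh's
// uniforms for the freshly acquired image before filling VkSubmitInfo,
// so the GPU never reads uniforms paired with a stale image index.
for (unsigned int i = 0; i < m_VulkanMesh.size(); i++)
{
    m_VulkanMesh.at(i).UpdateUniformBuffers(imageIndex);
}
// ... then submit and present as before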
I have additionally created a branch, VulkanTest: https://github.com/dekorlp/VulkanWrapper/tree/VulkanTest
I hope you can help me :)
Best regards
Pixma
After people told me to shorten the program, I did, and here is the shortened version with the same error as stated above. It only appears a few moments into the program; if I hit Continue, the program works fine. However, see the movement function? It doesn't work. The sprite refuses to move in any direction. However, if I give a very large floating value to move(), the sprite is displaced from its position when I start the program and stays at that new position with no further movement. For example, if I write sprite.move(400.f, 400.f), the sprite moves from (0,0) to (400,400) and stays there. It doesn't move any more.
Here's the shortened version of the code:
#include"SFML\Graphics.hpp"
#include<iostream>
int main()
{
sf::RenderWindow window(sf::VideoMode(640, 480), "CHECK",sf::Style::Default);
std::cout << "WORKS";
sf::Texture text;
text.loadFromFile("bahamut.png");
sf::Sprite sprite;
sf::Clock frap;
sprite.setTexture(text);
while (window.isOpen())
{
float fps = frap.restart().asSeconds();
sf::Vector2f movements;
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::A))
{
movements.y = 0;
movements.x = -1 * fps;
}
else
{if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::D))
{
movements.y = 0;
movements.x = 1 * fps;
}
else
{ if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::S))
{
movements.y = 1 * fps;
movements.x = 0;
}
else
{
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::W))
{
movements.y = -1 * fps;
movements.x = 0;
}
else
{
movements.x = 0;
movements.y = 0;
}
}
}
}
sprite.move(movements);
window.clear();
window.draw(sprite);
window.display();
}
return 0;
}
I improved the code and it still produces the same results and error.
Using the disassembler, I saw the crash occurs at
00B37AEE cmp esi,esp
in window.display(). When I create a function and use it to display the sprite, the movement occurs, but without the function, nothing.
Your logic says your movement is 0/0 if W is not pressed: the else of the W-pressed block overrides all prior settings. And moving the sprite should happen before you display.
I cannot see a reason for the null pointer exception, but that is what the debugger is for. Next time this happens, debug.
Oh, and it's int main(), not void. I know the compiler tolerates this error, but it's still an error and undefined behavior.
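For completeness, a minimal sketch of that restructuring, reusing the question's variables: each axis is checked independently so one key cannot override another, the delta is scaled by a speed constant (an assumed value, for illustration), and the move happens before drawing:

float dt = frap.restart().asSeconds();
const float speed = 100.f;  // pixels per second -- assumed for illustration
sf::Vector2f movement(0.f, 0.f);
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::A)) movement.x -= speed * dt;
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::D)) movement.x += speed * dt;
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::W)) movement.y -= speed * dt;
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Key::S)) movement.y += speed * dt;
sprite.move(movement);  // move first, then draw
window.clear();
window.draw(sprite);
window.display();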
Firstly, sorry for the poor question title; I didn't know exactly what to put!
I have an OpenGL application running in an SFML window context. I previously posted a question about poor performance, but that issue seems to be solved now.
As you can see in the images I have uploaded, something rather odd is happening. I don't really know how to describe it, but it looks like the right half of the window shouldn't be there!
Does anyone have any ideas about the problem?
Here is my code:
sf::ContextSettings settings;
settings.depthBits = 32;
settings.stencilBits = 8;
settings.antialiasingLevel = 4;
settings.majorVersion = 3;
settings.minorVersion = 0;
sf::Window window(sf::VideoMode(800, 600), "insert title", sf::Style::Default, settings);
window.setVerticalSyncEnabled(true);
bool running = true;
while (running)
{
    sf::Event e;
    while (window.pollEvent(e))
    {
        if (e.type == sf::Event::Closed)
        {
            running = false;
        }
        if (e.type == sf::Event::Resized)
        {
            glViewport(0, 0, e.size.width, e.size.height);
            gluLookAt(0, 0, -1, 0, 0, 0, 0, 1, 0);
        }
    }
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    glutSolidSphere(1, 12, 12);
    window.display();
}
Turns out this was caused by copying and pasting code.
Above the code shown, I had these lines:
sf::ContextSettings settings;
settings.depthBits = 24;
settings.stencilBits = 0;
settings.antialiasingLevel = 0;
settings.majorVersion = 3;
settings.minorVersion = 2;
The minor version was incorrect. Removing the lines 'settings.majorVersion = 3;' and 'settings.minorVersion = 2;' fixed the issue! Presumably requesting a 3.2 context yields a core profile, in which the fixed-function calls used here (gluLookAt, glutSolidSphere) are no longer available.
As an experiment I changed the major version to 4, which caused the program to crash altogether.
I've just upgraded my project from cocos2d 1.0.1 to 2.0, and after a lot of tweaking I'm unable to change the default color of a CCLabelTTF the way I did before (which saved me one line of code for each label I create). Before, I was doing it like this:
In CCLabelTTF.m:
- (id) initWithString:(NSString*)str dimensions:(CGSize)dimensions alignment:(CCTextAlignment)alignment lineBreakMode:(CCLineBreakMode)lineBreakMode fontName:(NSString*)name fontSize:(CGFloat)size
{
    if( (self=[super init]) ) {
        dimensions_ = CGSizeMake( dimensions.width * CC_CONTENT_SCALE_FACTOR(), dimensions.height * CC_CONTENT_SCALE_FACTOR() );
        alignment_ = alignment;
        fontName_ = [name retain];
        fontSize_ = size * CC_CONTENT_SCALE_FACTOR();
        lineBreakMode_ = lineBreakMode;
        color_ = ccBLACK;
        [self setString:str];
    }
    return self;
}
I was changing the color inside this method, since all of the initWithString... variants end up calling this one, but even if I do the same in cocos2D 2.0, it doesn't work.
Here's my new CCLabelTTF.m:
- (id) initWithString:(NSString*)str dimensions:(CGSize)dimensions hAlignment:(CCTextAlignment)alignment vAlignment:(CCVerticalTextAlignment)vertAlignment lineBreakMode:(CCLineBreakMode)lineBreakMode fontName:(NSString*)name fontSize:(CGFloat)size
{
    if( (self=[super init]) ) {
        // shader program
        self.shaderProgram = [[CCShaderCache sharedShaderCache] programForKey:SHADER_PROGRAM];
        dimensions_ = dimensions;
        hAlignment_ = alignment;
        vAlignment_ = vertAlignment;
        fontName_ = [name retain];
        fontSize_ = size;
        lineBreakMode_ = lineBreakMode;
        color_ = ccBLACK;
        [self setString:str];
    }
    return self;
}
Is it because of the shaderProgram thing that wasn't there before 2.0? Please help, I've tried everything so far :(
I even searched my whole project for files containing "ccWHITE" or "{255,255,255}", but there are none related to CCLabelTTF (except CCSprite, but if I change that one to ccBLACK, all my sprites become black).
Instead of setting the ivar, use the accessor for the property:
self.color = ccBLACK;
Also, you should not modify CCLabelTTF. If you want to change its behaviour, make a subclass.
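A minimal sketch of that subclass approach (BlackLabelTTF is a made-up name for illustration, not part of cocos2d):

// Hypothetical subclass that defaults its color to black by going
// through the property accessor rather than the ivar.
@interface BlackLabelTTF : CCLabelTTF
@end

@implementation BlackLabelTTF
- (id)initWithString:(NSString *)str fontName:(NSString *)name fontSize:(CGFloat)size
{
    if ((self = [super initWithString:str fontName:name fontSize:size])) {
        self.color = ccBLACK;
    }
    return self;
}
@end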