Hello Fellow Developers...
After 2 months, I am excited to reach the final phase of my first Universal App for Windows 10. It's actually a game I built for my son.
BUT....
The Tiles and Icons section is becoming a painful process; there are so many different icons that I can't stay focused! My app is for all devices, hence the complication.
Please, can anyone point me in the right direction? This is all my [FIRST] app needs before I can submit it for certification...
[EDIT]
I didn't actually ask the question correctly. What I meant to ask is: how can I generate all the needed images for the icons/tiles for my app? I know Photoshop very well and used an action I found, but it doesn't create all of them.
Thanks a Mill.
Matias
There is an extension for Visual Studio: UWP TileGenerator.
There is also this tool for Photoshop: http://go.microsoft.com/fwlink/p/?LinkId=760394 (link found in the design guidelines).
Finally, this website lets you generate icons for several kinds of applications, including UWP apps: http://cthedot.de/icongen
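If you prefer to script the resizing yourself rather than rely on a tool, a small JVM script can scale one master image to each target size. Below is a rough, hypothetical Kotlin sketch: the file names, the `Assets` folder, and the handful of sizes shown are placeholders only, and the complete list of assets and scale factors comes from the tile and icon design guidelines.

```kotlin
import java.awt.RenderingHints
import java.awt.image.BufferedImage
import java.io.File
import javax.imageio.ImageIO

// Example target assets only -- the real list (including the 125%/150%/200%/400%
// scale variants) comes from the UWP tile and icon guidelines.
val targets = mapOf(
    "Square44x44Logo.png" to Pair(44, 44),
    "Square150x150Logo.png" to Pair(150, 150),
    "Wide310x150Logo.png" to Pair(310, 150),
    "Square310x310Logo.png" to Pair(310, 310),
    "StoreLogo.png" to Pair(50, 50)
)

fun main() {
    // Assumes a large square master image sitting next to the script.
    val master = ImageIO.read(File("master-icon.png"))
    val outDir = File("Assets").apply { mkdirs() }

    for ((name, size) in targets) {
        val (w, h) = size
        val scaled = BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB)
        val g = scaled.createGraphics()
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR)
        // Naive stretch; a real pipeline would letterbox the wide tile instead.
        g.drawImage(master, 0, 0, w, h, null)
        g.dispose()
        ImageIO.write(scaled, "png", File(outDir, name))
    }
}
```

Adding the extra scale factors is then just a matter of extending the map, and the generated files can be dropped into the project's Assets folder and referenced from the package manifest.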
Related
For more context, I'm developing an Augmented Image Android app. Because of a series of unfortunate events, I ended up trying to develop this with absolutely zero Android experience, but here I am. The thing is, I can't find good tutorials on this topic (ARCore in Android Studio), so I am taking Google's example apps and trying to understand how they work.
The example apps go into a lot of detail about OpenGL, but I don't have the time to learn it properly. I found this thing called Scene Viewer, which seemed like just what I need: an easy way to load and display a model/scene at my ARCore anchors. But it seems discontinued, or, from what I have found, it is no longer compatible with Android Studio.
Is there anything out there that could serve this purpose? Or can Scene Viewer still do the job?
I'm new to Kotlin and Android Studio. I previously completed a project of mine with KivyMD, and I want to replicate it in Kotlin using Android Studio. The project has 56 screens. I've learnt that an activity represents a screen in Android Studio, which means I'd have to create 55 more activities in addition to MainActivity. Thinking that might be a bit much, I googled whether there is a limit to the number of activities I can create and found claims that it's 10.
So how do I fit in the content of the other screens? Or could I just go ahead and create all 56 activities?
Thanks in advance for your help.
Jetpack is a bunch of libraries created to ease app development and help people follow best practices, and the official recommendation is a single-Activity app:
Navigation
While activities are the system provided entry points into your app's UI, their inflexibility when it comes to sharing data between each other and transitions has made them a less than ideal architecture for constructing your in-app navigation. Today we are introducing the Navigation component as a framework for structuring your in-app UI, with a focus on making a single-Activity app the preferred architecture.
By using the Navigation component, your Activity is basically a container for Fragments, which act as your "screens". Navigation handles a lot of the boilerplate for swapping them in and out and maintaining history, and you focus more on connecting them together in a navigation graph.
It's a lot to learn at first, but it's definitely worth it, and if you're going to be messing with 56 destinations then you'll probably end up saving a lot of time by letting it handle the bulk of the work! Here's a codelab tutorial you can do to get up to speed with it, and here's the documentation, which starts with the basics and moves on to some of the more complex uses.
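To make the single-Activity idea concrete, here is a minimal, hypothetical sketch; the layout IDs, `HomeFragment`, and the `action_home_to_details` action are placeholder names I made up, not something from the codelab.

```kotlin
import android.os.Bundle
import android.view.View
import android.widget.Button
import androidx.appcompat.app.AppCompatActivity
import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController

// The one Activity: just a shell whose layout hosts a NavHostFragment
// pointing at your navigation graph (e.g. res/navigation/nav_graph.xml).
class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
    }
}

// One of the 56 "screens", written as a Fragment destination in the graph.
class HomeFragment : Fragment(R.layout.fragment_home) {
    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        view.findViewById<Button>(R.id.next_button).setOnClickListener {
            // Follow an action defined in the navigation graph;
            // the back stack and transitions are handled for you.
            findNavController().navigate(R.id.action_home_to_details)
        }
    }
}
```

Each additional screen is then just another Fragment plus a node in the graph, so the activity count stays at one no matter how many destinations you add.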
I have a freelance job on a VR business app and need to make it in VR and non-VR modes.
Can I develop it in Unity, and what problems might I face? Or can I make the non-VR part in Android Studio and then combine it with the Unity VR part?
I've been searching on the internet and can't find a proper answer.
Unity 4.6+ has a new UI that's canvas based; it's pretty OK, but not as nice as modern MVVM-enabled UI frameworks. There are assets you can buy that enable MVVM in Unity UI. I would recommend this if your UI is complex:
https://www.assetstore.unity3d.com/en/#!/search/page=1/sortby=rating/query=mvvm
The big problem with using Unity for any kind of business app is that when entering text into GUI.TextFields you can't edit the text directly in the textbox. For any kind of form that has a bunch of textboxes and things to interact with, you need to do it in UIKit. I myself wouldn't use Unity for what you want to do; try looking into the Google Android SDK.
When you use the "Holographic DirectX 11 App" template in Visual Studio, it creates an app that occupies the entire HoloLens view (I believe it's called the Holographic View).
How do you build a holographic app like the Hologram demos, where you can resize the box and place the app in the Holographic Shell?
BTW, can someone with a higher reputation create a new tag "hololens-directx"? I think there is beginning to be more DirectX development, and this would help distinguish these questions from the Unity ones.
Short Answer: Not Yet
Microsoft is reserving development rights such as this, as well as live tiles, for itself. Of the many forum posts regarding this, a member of the official Microsoft team has formally responded to the one below.
To understand the options available to you in your app development, see this page about the app model.
The ability to place holograms into the Holographic Shell is not exposed to app developers.
The Holograms app is an inbox app, and third-party apps cannot replicate this functionality.
App developers can only place a glTF model into the Shell as a link to their app.
https://learn.microsoft.com/en-us/uwp/schemas/appxpackage/uapmanifestschema/element-uap5-mixedrealitymodel
I believe these statements are true:
1) All Universal Apps Work As Holograms
2) Universal Apps can be built using HTML/JS
Does this mean I can build a holographic universal app using web technologies? For example, a holographic visualization dashboard in D3.js?
It's still too early to say definitively, but here is some info I could find.
UPDATE: There is now a library called HoloJS which allows devs to write apps in HTML.
First, your assumptions 1 and 2 are correct. There are ways to build UWP (Universal Windows Platform) apps in JavaScript/HTML. This means you could write a UWP JS app which can run WebGL in a 2D window placed somewhere in your environment. You could also run your app in Microsoft Edge.
So if all you want to do is display a 2D dashboard in a 3D room, then yes, it looks very possible. If you want the application to render 3D objects all around the user, there are some problems you will need to work around.
Quoted from https://forums.hololens.com/discussion/80/is-it-possible-to-use-webgl-with-hololens-repost#latest:
"Holographic apps are powered by the same graphics stack as the rest of the Windows 10 ecosystem. That means that just like the Xbox and Win32 games, apps for HoloLens are built on top of DirectX."
So you're kind of stuck with either Unity or DirectX if you want 3D visualizations that surround the user. BUT there could be a way...
A user at the bottom of this page http://forums.hololens.com/discussion/80/is-it-possible-to-use-webgl-with-hololens-repost said:
"That is interesting idea. If I understand correctly, you are trying to hook your Edge browser with your HoloLens and project 3D graphics with WebGL on your Edge browser based on the REST APIs available from HoloLens"
So, you could perhaps fullscreen your app, or find some way to ensure it is in front of your user's face, and then use a server to direct API calls from the HoloLens to your web app in order to transform your geometry around the user.
It might be worth looking into integrating D3 visualizations inside a three.js app if you want the holographic visualizations. https://www.youtube.com/watch?v=bWjn1N4SJsk
If you just want a 2D screen in the environment, then develop as normal and use Edge inside the HoloLens.