Change the language of the broker on the ONLYOFFICE desktop app

How can I change the language of the broker, for all texts?
I didn't find where I can make the change.
(screenshot attached)

Related

How to get selected text of active application from another app

It's more of a question in the "where do I start?" category.
I need to make an app that can read the currently selected text in Linux. It will only be used in KDE with KWin and X11 (Kubuntu, if that matters).
For example, this is the expected UX: I select text in Kate (or any other app) -> press some global shortcut -> an app either starts or is already running in the background, reads the selected text from Kate, and does something with it (saves it to a file, for example). It should be able to read the selection from any application (browsers, GTK apps, Qt apps, etc.).
Is it even possible? Where should I start? Which subsystem holds the information about the "currently focused window" and the "selected text in that window"? How can this be achieved?
I would very much appreciate any guidance or insight, because I'm completely new to Linux/KDE app development.
Maybe there's an open-source app that does this, so I can try to learn from its sources?
There are no requirements on language or framework, but Qt/C++ is preferable.
Thank you
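Under X11, the text currently highlighted in most applications is published as the PRIMARY selection, so a first experiment is simply to read PRIMARY when your global shortcut fires. Below is a minimal Python sketch that shells out to the `xclip` command-line tool (assumed to be installed; the function name and the injectable `run` parameter are mine, added so the logic can be exercised without a display):

```python
import shutil
import subprocess

def read_primary_selection(run=None):
    """Read the X11 PRIMARY selection, i.e. the text currently
    highlighted in whatever window owns the selection.

    `run` is an injectable replacement for subprocess.run, so the
    framing logic can be tested without an X display.
    """
    if run is None:
        if shutil.which("xclip") is None:
            raise RuntimeError("xclip is not installed")
        run = subprocess.run
    result = run(
        ["xclip", "-selection", "primary", "-out"],
        capture_output=True, check=True,
    )
    return result.stdout.decode("utf-8", errors="replace")
```

Binding this to a global shortcut can be done through KDE's custom-shortcuts settings. Note that not every application publishes its selection to PRIMARY (and Wayland behaves differently), so a common fallback is to synthesize Ctrl+C toward the focused window and read the CLIPBOARD selection instead.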

Demystifying the Virtual Keyboard and Touchpad in Windows 10

I'm new to Windows development, and am looking for assistance on where to get started for a particular project.
In short, I want to create a windowed application that allows a user to send keyboard and mouse inputs to another application, by interacting with various UI controls via touch. Essentially a custom on-screen keyboard/touchpad that can be used for sending keyboard-shortcuts to other applications.
There are two applications in Windows 10 that perform exactly the way I would want my new app to - the On-Screen Keyboard and Touchpad:
https://support.microsoft.com/en-us/help/4337906/windows-10-open-the-on-screen-touchpad
https://support.microsoft.com/en-us/help/10762/windows-use-on-screen-keyboard
At the most basic level, I want to define my own interface (or allow the end user to define their own), and use the same code that the onscreen keyboard/touchpad are using for handling touch events and injecting inputs into the system.
I'm uncertain at what level I would need to start to get the functionality I need - UWP? WPF? C++?
If anyone has any insight into how the on-screen utilities were built, I think that would give me an excellent head start.
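I can't say how the built-in utilities are implemented internally, but the documented public API for injecting keyboard events on Windows is SendInput from user32.dll. A rough Python/ctypes sketch follows (the struct layouts mirror the Win32 KEYBDINPUT/INPUT definitions, the helper names are mine, and the actual injection only works on Windows):

```python
import ctypes
import sys

# Win32 constants (from winuser.h)
INPUT_KEYBOARD = 1
KEYEVENTF_KEYUP = 0x0002

WORD = ctypes.c_uint16
DWORD = ctypes.c_uint32
ULONG_PTR = ctypes.c_size_t

class KEYBDINPUT(ctypes.Structure):
    _fields_ = [
        ("wVk", WORD),          # virtual-key code, e.g. 0x11 = VK_CONTROL
        ("wScan", WORD),
        ("dwFlags", DWORD),     # 0 = key down, KEYEVENTF_KEYUP = key up
        ("time", DWORD),
        ("dwExtraInfo", ULONG_PTR),
    ]

class _INPUTUNION(ctypes.Union):
    # Padded to the size of the largest member (MOUSEINPUT) in winuser.h.
    _fields_ = [("ki", KEYBDINPUT), ("_pad", ctypes.c_ubyte * 32)]

class INPUT(ctypes.Structure):
    _fields_ = [("type", DWORD), ("u", _INPUTUNION)]

def key_events(*virtual_keys):
    """Build a press-then-release INPUT sequence for a shortcut,
    e.g. key_events(0x11, 0x43) for Ctrl+C."""
    events = []
    for vk in virtual_keys:            # press keys in order
        events.append(INPUT(type=INPUT_KEYBOARD,
                            u=_INPUTUNION(ki=KEYBDINPUT(wVk=vk))))
    for vk in reversed(virtual_keys):  # release in reverse order
        events.append(INPUT(type=INPUT_KEYBOARD,
                            u=_INPUTUNION(ki=KEYBDINPUT(wVk=vk,
                                                        dwFlags=KEYEVENTF_KEYUP))))
    return events

def send_shortcut(*virtual_keys):
    """Inject the shortcut into whatever window currently has focus."""
    if sys.platform != "win32":
        raise RuntimeError("SendInput is Windows-only")
    events = key_events(*virtual_keys)
    arr = (INPUT * len(events))(*events)
    ctypes.WinDLL("user32").SendInput(len(arr), arr, ctypes.sizeof(INPUT))
```

For the UI side, whichever framework you choose (WPF, UWP, or plain Win32), the important constraint is that your window must not steal keyboard focus from the target application; the WS_EX_NOACTIVATE extended window style is the usual trick for that.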

Listen to Keyboard and Mouse Chrome Hosted App

I am building a Chrome hosted app - it's a sort of time-tracking software that will monitor user activity, i.e. whether the user is working or not. The app is also supposed to listen to the user's keyboard strokes and mouse movements. I am not able to find any such API in the documentation. Is it possible or not?
Thank You
It is possible with the chrome.input.ime API.
This allows your extension to handle keystrokes, set the composition, and manage the candidate window.
For example, you can use onKeyEvent for keyboard events or onCandidateClicked for candidate (mouse) clicks. You may go through the documentation for more information.

Remote control a Chrome Extension

I've written a non-published (personal) Chrome extension that performs page checking and then performs actions such as opening new tabs if certain conditions are met. I would like to be able to "remote control" it from my phone though, e.g. turn on or off or adjust settings when I'm away from my desk.
I considered whether the extension could read/write a file in Dropbox, which I could then also edit from my phone or any other device. But I'm not sure if extensions are allowed to arbitrarily read/write the filesystem, or whether only "apps" are. Any other suggestions?
Assuming you can't directly connect to your computer (otherwise wOxxOm's answer is valid):
You could make a companion phone app and use GCM push messages; your phone would message your server (which can easily be hosted on a free App Engine tier if it's just for your private use), and the server would push the message out to the extension.
Though it will probably be much easier to just have said App Engine server up and providing a WebSocket endpoint that your extension can connect to in order to receive commands in real time, plus some sort of API / control panel on the web (authenticated, of course).
Any free webserver-based solution would lag - as badly as 500 ms, I think.
Try making a complementary native PC program: mobile apps for remote control usually have their PC part running as a background service or as an application with just a system-tray icon. Such a program opens a TCP/UDP port on the PC and listens for commands from the mobile app, and it can communicate with your extension via Chrome's native messaging API.
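For the extension side of that last suggestion: Chrome's native messaging protocol frames every message as a 4-byte native-endian length prefix followed by UTF-8 JSON over the host's stdin/stdout. A minimal Python host sketch (the echo behaviour and function names are mine; the host additionally has to be registered with a native messaging host manifest, not shown here):

```python
import json
import struct
import sys

def encode_message(obj):
    """Frame one message for Chrome's native messaging protocol:
    a 4-byte native-endian length prefix followed by UTF-8 JSON."""
    payload = json.dumps(obj).encode("utf-8")
    return struct.pack("=I", len(payload)) + payload

def decode_message(stream):
    """Read one framed message from a binary stream; None on EOF."""
    header = stream.read(4)
    if len(header) < 4:
        return None
    (length,) = struct.unpack("=I", header)
    return json.loads(stream.read(length).decode("utf-8"))

def main():
    # Host loop: acknowledge each command the extension sends.
    while True:
        msg = decode_message(sys.stdin.buffer)
        if msg is None:
            break
        sys.stdout.buffer.write(encode_message({"ack": msg}))
        sys.stdout.buffer.flush()

if __name__ == "__main__":
    main()
```

The same host process can also hold the TCP/UDP socket for the phone, forwarding whatever the phone sends as a framed message back to the extension.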

Create Application "web" protocol

I am using Chrome and I see this line when hovering over a link:
steam://run/17730
An example exists at the link below: click "play game", which opens a dialog, and then hover over "Yes I have Steam".
http://store.steampowered.com/app/17730/
This appears to be a REST-style command sent to a client application using an application-specific protocol, in this case the "Steam game management service".
My question is this:
If it is not a local command, what is it?
If it is a local command, how could I implement something like this using, say, a bill:// protocol?
I can't find anything on this, so this may be tagged incorrectly; I apologize for that.
It seems that Steam has registered a protocol with the browser which communicates with the local Steam process. The following link might get you started with registering your own protocol, in Firefox at least:
https://support.steampowered.com/kb_article.php?ref=2087-MZES-9065
I would guess that there are similar articles on the Steam support site for other browsers.
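For what it's worth, on Windows the registration side boils down to a few registry keys under HKEY_CLASSES_ROOT: an empty "URL Protocol" value marks the key as a URL scheme handler, and the shell\open\command subkey names the executable to launch. A sketch of a .reg file for a hypothetical bill: scheme (the path and executable name here are made up):

```reg
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\bill]
@="URL:Bill Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\bill\shell\open\command]
@="\"C:\\Program Files\\BillApp\\billapp.exe\" \"%1\""
```

With this in place, clicking a bill://run/42 link makes the browser launch the registered executable with the full URL as its argument, and the application parses the URL itself.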
The other part of this is probably going to be writing a simple local web server that can receive and respond to these requests. I'm not sure what language you are working in, but an example for C# is here: http://www.codeproject.com/Articles/36517/Communicating-from-the-Browser-to-a-Desktop-Applic. Best of luck!
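That second half - a small local server the protocol handler (or a web page) can talk to - can be sketched with Python's standard library alone. The /run/<id> URL shape below just mirrors the steam://run/17730 example; everything else (names, response format) is an assumption:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class CommandHandler(BaseHTTPRequestHandler):
    """Tiny local endpoint: a request to /run/<id> is answered
    with a JSON description of the command to perform."""

    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "run":
            body = json.dumps({"action": "run", "id": parts[1]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def start_server(port=0):
    """Start on 127.0.0.1 (port 0 = ephemeral); returns (server, port)."""
    server = HTTPServer(("127.0.0.1", port), CommandHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Binding to 127.0.0.1 keeps the server invisible to the rest of the network, which is what you want for a purely local desktop integration.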
