Are language servers remote or local processes? - language-server-protocol

A lot of IDEs and plugins talk about language servers that provide IDE features like auto-completion, linting, and highlighting. Is the language server just some local process that is also running on my machine, or is my code being sent somewhere for analysis (i.e., would features stop working if I have no internet)? Also, if the code is being sent somewhere, how is that safe?

The process is nicely explained here. In the case of VS Code extensions, 99% of all LSP servers will be local. But at least in theory, one might just as well run over the network.
Also if the code is being sent somewhere how is that safe?
The Language Server Protocol defines only the protocol itself; it says nothing about encryption or permission constraints. If you use some client to interact with it (say, an IDE extension), the client itself can already do whatever it wants without you noticing. The server is just another implementation detail.
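To make "local process" concrete, here is a minimal sketch in Python of what an editor typically does under the hood: spawn the server as a child process and exchange Content-Length-framed JSON-RPC messages over its stdin/stdout. The server name pylsp is an assumption; substitute whatever language server is installed on your machine.

import json
import subprocess

# Spawn the language server as an ordinary local child process.
# "pylsp" is an assumption; any LSP server on your PATH works the same way.
server = subprocess.Popen(["pylsp"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

body = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": None, "capabilities": {}},
})
# Every LSP message is framed with a Content-Length header.
server.stdin.write(f"Content-Length: {len(body)}\r\n\r\n{body}".encode("utf-8"))
server.stdin.flush()

header = server.stdout.readline().decode("utf-8")  # "Content-Length: N"
server.stdout.readline()  # blank line ends the header block
length = int(header.split(":")[1])
print(json.loads(server.stdout.read(length)))  # the server's advertised capabilities

Nothing in this exchange leaves the machine unless the client is explicitly configured to reach a server over a socket instead.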

Related

Is it possible to move a simple .exe-based calculator tool to the cloud for multiple users?

A few years back, we had a programmer create a type of calculator for use in the audio industry. It was created/compiled in C and is a .exe we offer for free download. Once a user downloads it, they can run it locally.
Our goal is to move that application/calculator to the "cloud" where any user can hypothetically visit a URL, and the calculator is there, running, and ready for user interaction by multiple users.
Is what I describe possible?
Will I need Azure?
What exactly in Azure will I need (i.e. which products/resources)?
Do I need to use the compiled/decompiled version?
Will we need to change any code in the .exe to make this work?
What do I not know that I should know?
I sincerely appreciate any and all input from those that are probably reading this and fully understanding the simplicity of it while I struggle to wrap my head around it.
Thank you!
@Alex, thank you so much for the response. I apologize I was late getting back to you - I have 4 kiddos home - and I am actually not sure how to answer that fully. Here is a link to the freely downloadable file we offer to anyone and everyone - it does require an email, but that is just for notifications when the version is updated: https://caf.prosoundtraining.com/verify/ - and here is the home page of the site: https://caf.prosoundtraining.com - It works by the user downloading the program (which is what we would like to move to the cloud) and then inputting various values to determine what watt amplifier to use based on speaker selection, length of cables, distance of audience from speakers, etc. - basically several calculation tools like that, plus the main program that allows users to visualize amplifier performance by running signal tests through the amplifier. Once a user downloads the program, they can choose to use just the calculators (which are separate in the decompiled version I have) or the complete program.
Is what I describe possible?
Well, if you are talking about a pure command-line application: yes.
But if you are talking about a GUI application: if your application is not fully usable from the command line, go back to the source code, strip out all the windows, buttons, and other UI code, AND make it usable from the command line.
Will I need Azure?
Any cloud provider will do just fine. Depending on the traffic, an old laptop with an internet connection might be enough for your case.
What exactly in Azure will I need (i.e. which products/resources)?
An Azure virtual machine with a publicly accessible IP.
Do I need to use the compiled/decompiled version?
Will we need to change any code in the .exe to make this work?
What do I not know that I should know?
Using the decompiled version (source code):
WebAssembly
WebAssembly is a new type of code that can be run in modern web browsers — it is a low-level assembly-like language with a compact binary format that runs with near-native performance and provides languages such as C/C++, C# and Rust with a compilation target so that they can run on the web. It is also designed to run alongside JavaScript, allowing both to work together.
Emscripten
Compile C and C++ code, or any other language that uses LLVM, into WebAssembly, and run it on the Web, Node.js, or other wasm runtimes.
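For the decompiled-source route, the canonical Emscripten invocation is a single command; calculator.c below is a placeholder for your actual source file:

emcc calculator.c -o calculator.html

This emits a .wasm module plus the HTML/JavaScript glue needed to load it, all of which can be hosted as static files.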
Using the compiled version (executable):
Common Gateway Interface-ish approach
Common Gateway Interface (CGI) is an interface specification for web servers to execute programs like console applications (also called command-line interface programs) running on a server that generates web pages dynamically. Such programs are known as CGI scripts or simply as CGIs. The specifics of how the script is executed by the server are determined by the server. In the common case, a CGI script executes at the time a request is made and generates HTML.
My suggestion would be to use Python, PHP, or any scripting language you are familiar with to spin up a web server and execute commands based on incoming requests.
Python example:
import subprocess
from flask import Flask

app = Flask(__name__)

@app.route('/calculator')  # Accessible from http://[ipaddress]:[port]/calculator
def hello_RedPanda():
    command = "..."  # as if you were running your program from cmd
    result = subprocess.check_output(command)  # execute the given command
    return result  # returned to your web browser
Once you are done with all the back-end plumbing, you can add your buttons and tabs back by rebuilding your UI on the client side using HTML, CSS, and JavaScript.
Building the UI on the client side is actually the easy part. The really tricky part in your case is (as I mentioned above) making your application usable from the command line.
See more:
WebAssembly
Emscripten
CGI
A case similar to yours

Transfer protocol for sending user uploaded files to a remote server?

I'm used to working with user-uploaded files on the same server, and to transferring my own files to a remote server, but not to transferring user-uploaded files to a remote server.
I'm looking for the best (industry) practice for selecting a transfer protocol in this regard.
My application is running Django on a Linux Server and the files live on a Windows Server.
Does it not matter which protocol I choose as long as it's secure (FTPS, SFTP, HTTPS)? Or is one better than the other in terms of performance/security specifically in regards to user-uploaded files?
Please do not link to questions that explain the differences of protocols, I am asking specifically in the context of user-uploaded files.
As long as you choose a standard protocol that provides (mutual) authentication, encryption and message authentication, there is not much difference security-wise. If all of this is provided by a layer of TLS in your chosen protocol (like in all of your examples), you can't make a big mistake on a design level (but implementation is key, many security bugs are bugs of implementation, and not design flaws). Such protocols might differ in the supported list of algorithms for different purposes though.
Performance-wise there can be quite significant differences; it depends on what you want to optimize for. If you choose HTTPS, you won't be able to keep a connection open for a long time, and would most probably have to bear the overhead of the whole connection setup, with authentication and everything, for every transmitted file. (Well, you can actually keep an HTTPS connection open, but that would be quite a custom implementation for such file uploads.) Choosing FTPS/SFTP, you will be able to keep a connection open and transmit as many files as you want, but you would probably need more complex error-handling logic (sometimes connections terminate without the underlying sockets knowing about it for a while, and so on). So in short, I think HTTPS would be more resilient, but secure FTP would be more performant for many small files.
It's also an architecture question: by using HTTPS, you would be able to implement all of this in your application code, while something like FTP would mean depending on external components, which might be important from an operational point of view (think about how this will actually be deployed and whether there is already a devops function to manage proper operations).
Ultimately it's just a design decision you have to make, the above is just a few things that came to mind without knowing all the circumstances, and not at all a comprehensive list of things to consider.
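To make the SFTP option concrete, here is a minimal sketch using the third-party paramiko library; the host name, user, key path, and file paths are placeholder assumptions:

import paramiko

# Push a user-uploaded file from the Django host to the remote server over SFTP.
client = paramiko.SSHClient()
client.load_system_host_keys()  # authenticate the server against known_hosts
client.connect("files.example.com", username="django",
               key_filename="/home/django/.ssh/id_ed25519")

sftp = client.open_sftp()
try:
    # One open connection can carry many transfers, which is where
    # SFTP tends to beat per-request HTTPS uploads for many small files.
    sftp.put("/srv/uploads/report.pdf", "/uploads/report.pdf")
finally:
    sftp.close()
    client.close()

The HTTPS alternative would be an ordinary authenticated POST from your application code, with no extra server-side component to operate.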

Disable Networking in Electron

Electron.js is a user-interface toolkit that allows a web application to operate as an arbitrary GUI.
However, there are some applications that should be considered sensitive - for instance, a GUI for banking should have strong assurances that it's not doing anything mischievous.
I'm wondering if the electron executable (or node.js itself) would allow operation in a mode where networking is outright disabled - that way, as a consumer, I can at least be confident a user interface isn't sending my password off-site.
Something like ./node_modules/.bin/electron --no-networking index.js would be very convenient, albeit a far cry.
I haven't tried whether it works or not, but the following line simulates offline mode in Chromium.
// To emulate a network outage.
window.webContents.session.enableNetworkEmulation({ offline: true });

secure data transfer between chrome extension and native messaging host [duplicate]

I have an NPAPI plugin for sign-in data on a website.
I want to replace it with Native Messaging technology. I have read the documentation, but I have a question: is this technology safe?
Can hackers intercept data in transit from JavaScript to the native host app and back?
Edit: merging in a better-worded question:
How secure is stdio data transfer?
Is there a way to man-in-the-middle such a data transfer?
It is, in principle, possible to inspect stdio calls made by an executable.
For instance, on Linux systems, you can use strace for that purpose. I don't know a similar Windows tool, but it's conceivable that it exists.
That would be akin to attaching a debugger to the browser/native host itself, and can only be done by someone who has access to the local machine with the same user credentials or administrative access. In particular, the user running Chrome can do it - just like he/she can use Dev Tools to inspect and intercept the data at the JavaScript side.
So, yes, in principle that can be intercepted, but only by someone with full rights to execute/debug code on the system it's running on, and the OS takes care not to allow normal users to inspect other users' processes in this way.
You realize, of course, that Native Messaging will ONLY work within the bounds of the machine: With native messaging the browser will communicate with your host application over stdin/stdout.
So what exactly is the problem here? If the hackers are capable of listening to your stdin/stdout, they are already on your machine - you've already lost.
Not really - sometimes hackers can find an XSS flaw in a vulnerable site, and then it may be possible to use Native Messaging to execute commands on the victim's system.
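For the curious, here is a minimal sketch in Python of what a native messaging host actually is: a local process that reads length-prefixed JSON from stdin and writes framed replies to stdout (Chrome frames each message with a 4-byte native-endian length). The echo behaviour is only an illustration.

import json
import struct
import sys

def read_message():
    # The browser prefixes every message with a 4-byte length.
    raw_length = sys.stdin.buffer.read(4)
    if not raw_length:
        sys.exit(0)  # the browser closed the pipe
    (length,) = struct.unpack("=I", raw_length)
    return json.loads(sys.stdin.buffer.read(length).decode("utf-8"))

def send_message(message):
    encoded = json.dumps(message).encode("utf-8")
    sys.stdout.buffer.write(struct.pack("=I", len(encoded)))
    sys.stdout.buffer.write(encoded)
    sys.stdout.buffer.flush()

while True:
    request = read_message()
    send_message({"echo": request})  # everything stays on the local machine

Everything here travels through an ordinary pipe between two local processes; there is no network hop to attack.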

Open source application/protocol for internet applications

In the near future I'll need to start working on a new project that consists of highly loaded TCP/IP servers and clients that communicate with those servers. I know the basics of TCP/IP and can make the server and clients talk over the wire.
The problem is that I need to find some way to protect the server against other "clients" that can send bogus data and may crash the server. I'm looking for any ideas or recommendations for an application-level protocol that I can use for my application. I'm pretty sure there must be some kind of open-source MMORPG that has already implemented such a protocol.
Any other ideas are very welcome.
P.S. I have checked already the WorldForge project.
Use authentication and write your server so that bogus data doesn't crash it. You can also utilize firewalls where appropriate.
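As one concrete way to keep bogus data from crashing the server, here is a minimal sketch in Python of defensive, length-prefixed framing; the port, size limit, and echo behaviour are arbitrary assumptions, and real code would layer authentication on top:

import socket
import struct

MAX_FRAME = 64 * 1024  # refuse anything larger; a bogus length must not force a huge allocation

def recv_exact(conn, n):
    # Read exactly n bytes, or return None if the client hangs up early.
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            return None
        data += chunk
    return data

def handle(conn):
    try:
        header = recv_exact(conn, 4)
        if header is None:
            return
        (length,) = struct.unpack("!I", header)
        if length > MAX_FRAME:
            return  # bogus frame: drop the connection instead of trusting it
        payload = recv_exact(conn, length)
        if payload is None:
            return
        conn.sendall(struct.pack("!I", len(payload)) + payload)  # echo back
    except (ConnectionError, struct.error):
        pass  # a misbehaving client must never take the whole server down
    finally:
        conn.close()

server = socket.create_server(("0.0.0.0", 9000))
while True:
    conn, _ = server.accept()
    handle(conn)

The point is the validation pattern, not the echo: every field read off the wire gets a sanity check before it is used.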
Have a look at http://www.devmaster.net/ for game development. I've read many useful articles there.
