Rust lsp-server: get the text of the file being edited

I'm currently writing a language server in Rust and have the following problem: I want to add code completion that is sensitive to the code that has already been written. I followed the example for the lsp-server crate I'm using. How can I get the text of the file currently being edited?
Sadly there is not much documentation. I've already tried to work my way through the GitHub repository of rust-analyzer, but it is too overwhelming and I can't find a solution. I know how to get the file path, but when I open the file I only get the saved version, not the edited one.
Currently my code does not differ much from the example, but I'll add it anyway:
mod completion_data;

use std::error::Error;

use lsp_server::{Connection, ExtractError, Message, RequestId, Response};
use lsp_types::notification::{Initialized, Notification};
use lsp_types::request::{
    CodeLensRequest, Completion, DocumentColor, DocumentHighlightRequest, HoverRequest, Initialize,
    Request, ShowDocument,
};
use lsp_types::{
    CompletionItem, CompletionList, CompletionOptions, TextDocumentChangeRegistrationOptions,
    TextDocumentIdentifier, TextDocumentSyncCapability, TextDocumentSyncOptions, WorkspaceEdit,
};
use lsp_types::{InitializeParams, ServerCapabilities};
use serde::de::DeserializeOwned;

use crate::completion_data::get_completion_items;
fn main() -> Result<(), Box<dyn Error + Sync + Send>> {
    // Note that we must have our logging only write out to stderr.
    eprintln!("starting generic LSP server");

    // Create the transport. Includes the stdio (stdin and stdout) versions but this could
    // also be implemented to use sockets or HTTP.
    let (connection, io_threads) = Connection::stdio();

    // Run the server and wait for the two threads to end (typically triggered by the LSP Exit event).
    let server_capabilities = serde_json::to_value(&ServerCapabilities {
        completion_provider: Some(CompletionOptions {
            trigger_characters: Some(vec!["&".to_string()]),
            ..Default::default()
        }),
        ..Default::default()
    })
    .unwrap();
    let initialization_params = connection.initialize(server_capabilities)?;
    main_loop(connection, initialization_params)?;
    io_threads.join()?;

    // Shut down gracefully.
    eprintln!("shutting down server");
    Ok(())
}
fn main_loop(
    connection: Connection,
    params: serde_json::Value,
) -> Result<(), Box<dyn Error + Sync + Send>> {
    let _params: InitializeParams = serde_json::from_value(params).unwrap();
    for msg in &connection.receiver {
        match msg {
            Message::Request(request) => {
                if connection.handle_shutdown(&request)? {
                    return Ok(());
                }
                match request.method.as_str() {
                    Completion::METHOD => {
                        let (id, completion_params) = cast::<Completion>(request)?;
                        // TODO: Get the current text and suggest completions based on it
                    }
                    _ => {}
                }
            }
            Message::Response(resp) => {
                eprintln!("got response: {:?}", resp);
            }
            Message::Notification(not) => {
                eprintln!("got notification: {:?}", not);
            }
        }
    }
    Ok(())
}
fn cast<R>(
    req: lsp_server::Request,
) -> Result<(RequestId, R::Params), ExtractError<lsp_server::Request>>
where
    R: lsp_types::request::Request,
    R::Params: serde::de::DeserializeOwned,
{
    req.extract(R::METHOD)
}

Related

Receiver on tokio's mpsc channel only receives messages when buffer is full

I've spent a few hours trying to figure this out and I'm pretty much out of ideas. I found a question with a similar name, but that looked like something was blocking synchronously, which was messing with tokio. That may very well be the issue here too, but I have absolutely no idea what is causing it.
Here is a heavily stripped-down version of my project which hopefully gets the issue across.
use std::io;

use futures_util::{
    SinkExt,
    stream::{SplitSink, SplitStream},
    StreamExt,
};
use tokio::{
    net::TcpStream,
    sync::mpsc::{channel, Receiver, Sender},
};
use tokio_tungstenite::{
    connect_async,
    MaybeTlsStream,
    tungstenite::Message,
    WebSocketStream,
};

#[tokio::main]
async fn main() {
    connect_to_server("wss://a_valid_domain.com".to_string()).await;
}

async fn read_line() -> String {
    loop {
        let mut str = String::new();
        io::stdin().read_line(&mut str).unwrap();
        str = str.trim().to_string();
        if !str.is_empty() {
            return str;
        }
    }
}

async fn connect_to_server(url: String) {
    let (ws_stream, _) = connect_async(url).await.unwrap();
    let (write, read) = ws_stream.split();
    let (tx, rx) = channel::<ChannelMessage>(100);
    tokio::spawn(channel_thread(write, rx));
    tokio::spawn(handle_std_input(tx.clone()));
    read_messages(read, tx).await;
}

#[derive(Debug)]
enum ChannelMessage {
    Text(String),
    Close,
}

// PROBLEMATIC FUNCTION
async fn channel_thread(
    mut write: SplitSink<WebSocketStream<MaybeTlsStream<TcpStream>>, Message>,
    mut rx: Receiver<ChannelMessage>,
) {
    while let Some(msg) = rx.recv().await {
        println!("{:?}", msg); // This only fires when buffer is full
        match msg {
            ChannelMessage::Text(text) => write.send(Message::Text(text)).await.unwrap(),
            ChannelMessage::Close => {
                write.close().await.unwrap();
                rx.close();
                return;
            }
        }
    }
}

async fn read_messages(
    mut read: SplitStream<WebSocketStream<MaybeTlsStream<TcpStream>>>,
    tx: Sender<ChannelMessage>,
) {
    while let Some(msg) = read.next().await {
        let msg = match msg {
            Ok(m) => m,
            Err(_) => continue,
        };
        match msg {
            Message::Text(m) => println!("{}", m),
            Message::Close(_) => break,
            _ => {}
        }
    }
    if !tx.is_closed() {
        let _ = tx.send(ChannelMessage::Close).await;
    }
}

async fn handle_std_input(tx: Sender<ChannelMessage>) {
    loop {
        let str = read_line().await;
        if tx.is_closed() {
            break;
        }
        tx.send(ChannelMessage::Text(str)).await.unwrap();
    }
}
As you can see, what I'm trying to do is:
- Connect to a websocket
- Print outgoing messages from the websocket
- Forward any input from stdin to the websocket
- Also a custom heartbeat solution, which was trimmed out
The issue lies in the channel_thread() function. I move the websocket writer into this function as well as the channel receiver. The issue is, it only loops over the sent objects when the buffer is full.
I've spent a lot of time trying to solve this, any help is greatly appreciated.
Here, you make a blocking synchronous call in an async context:
async fn read_line() -> String {
    loop {
        let mut str = String::new();
        io::stdin().read_line(&mut str).unwrap();
        // ^^^^^^^^^^^^^^^^^^^
        // This is sync+blocking
        str = str.trim().to_string();
        if !str.is_empty() {
            return str;
        }
    }
}
You should never make blocking synchronous calls in an async context, because doing so prevents the entire thread from running other async tasks. Your channel-receiver task is likely assigned to the same thread, so it has to wait until all the blocking calls are done and whatever invokes this function yields back to the async runtime.
Tokio has its own async version of stdin, which you should use instead.
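A minimal sketch of that replacement (assumes tokio is built with the io-std and io-util features; tokio::io::stdin and AsyncBufReadExt are part of tokio's public API):

use tokio::io::{AsyncBufReadExt, BufReader};

async fn read_line() -> String {
    // BufReader::lines gives an async line iterator over tokio's non-blocking stdin,
    // so the runtime can schedule other tasks while we wait for input.
    let mut lines = BufReader::new(tokio::io::stdin()).lines();
    while let Ok(Some(line)) = lines.next_line().await {
        let trimmed = line.trim().to_string();
        if !trimmed.is_empty() {
            return trimmed;
        }
    }
    // stdin was closed; real code should signal shutdown instead of panicking.
    panic!("stdin closed");
}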

Tokio channel sends, but doesn't receive

TL;DR: I'm trying to have a background task, identified by an ID, that is controlled via that ID through web calls, and the background task doesn't seem to be getting the message through any of the channel types I've tried.
I've tried both the std channels and tokio's, and of those I've tried all but the watch type from tokio. All have the same result, which probably means that I've messed something up somewhere without realizing it, but I can't find the issue:
use std::collections::{
    hash_map::Entry::{Occupied, Vacant},
    HashMap,
};
use std::sync::Arc;

use tokio::sync::mpsc::{self, UnboundedSender};
use tokio::sync::RwLock;
use tokio::task::JoinHandle;
use uuid::Uuid;
use warp::{http, Filter};

#[derive(Default)]
pub struct Switcher {
    pub handle: Option<JoinHandle<bool>>,
    pub pipeline_end_tx: Option<UnboundedSender<String>>,
}

impl Switcher {
    pub fn set_sender(&mut self, tx: UnboundedSender<String>) {
        self.pipeline_end_tx = Some(tx);
    }
    pub fn set_handle(&mut self, handle: JoinHandle<bool>) {
        self.handle = Some(handle);
    }
}

const ADDR: [u8; 4] = [0, 0, 0, 0];
const PORT: u16 = 3000;

type RunningPipelines = Arc<RwLock<HashMap<String, Arc<RwLock<Switcher>>>>>;

#[tokio::main]
async fn main() {
    let running_pipelines = Arc::new(RwLock::new(HashMap::<String, Arc<RwLock<Switcher>>>::new()));

    let session_create = warp::post()
        .and(with_pipelines(running_pipelines.clone()))
        .and(warp::path("session"))
        .then(|pipelines: RunningPipelines| async move {
            println!("session requested OK!");
            let id = Uuid::new_v4();
            let mut switcher = Switcher::default();
            let (tx, mut rx) = mpsc::unbounded_channel::<String>();
            switcher.set_sender(tx);
            let t = tokio::spawn(async move {
                println!("Background going...");
                // This would be something processing in the background until it received the end signal
                match rx.recv().await {
                    Some(v) => {
                        println!(
                            "Got end message:{} YESSSSSS#!##!!!!!!!!!!!!!!!!1111eleven",
                            v
                        );
                    }
                    None => println!("Error receiving end signal:"),
                }
                println!("ABORTING HANDLE");
                true
            });
            let ret = HashMap::from([("session_id", id.to_string())]);
            switcher.set_handle(t);
            {
                pipelines
                    .write()
                    .await
                    .insert(id.to_string(), Arc::new(RwLock::new(switcher)));
            }
            Ok(warp::reply::json(&ret))
        });

    let session_end = warp::delete()
        .and(with_pipelines(running_pipelines.clone()))
        .and(warp::path("session"))
        .and(warp::query::<HashMap<String, String>>())
        .then(
            |pipelines: RunningPipelines, p: HashMap<String, String>| async move {
                println!("session end requested OK!: {:?}", p);
                match p.get("session_id") {
                    None => Ok(warp::reply::with_status(
                        "Please specify session to end",
                        http::StatusCode::BAD_REQUEST,
                    )),
                    Some(id) => {
                        let mut pipe = pipelines.write().await;
                        match pipe.entry(String::from(id)) {
                            Occupied(handle) => {
                                println!("occupied");
                                let (k, v) = handle.remove_entry();
                                drop(pipe);
                                println!("removed from hashmap, key:{}", k);
                                let s = v.write().await;
                                if let Some(h) = &s.handle {
                                    if let Some(tx) = &s.pipeline_end_tx {
                                        match tx.send("goodbye".to_string()) {
                                            Ok(res) => {
                                                println!(
                                                    "sent end message|{:?}| to fpipeline: {}",
                                                    res, id
                                                );
                                                // Added this to try to get it to at least Error on the other side
                                                drop(tx);
                                            }
                                            Err(err) => println!(
                                                "ERROR sending end message to pipeline({}):{}",
                                                id, err
                                            ),
                                        };
                                    } else {
                                        println!("no sender channel found for pipeline: {}", id);
                                    };
                                    h.abort();
                                } else {
                                    println!(
                                        "no luck finding the value in handle in the switcher: {}",
                                        id
                                    );
                                };
                            }
                            Vacant(_) => {
                                println!("no luck finding the handle in the pipelines: {}", id)
                            }
                        };
                        Ok(warp::reply::with_status("done", http::StatusCode::OK))
                    }
                }
            },
        );

    let routes = session_create
        .or(session_end)
        .recover(handle_rejection)
        .with(warp::cors().allow_any_origin());

    println!("starting server...");
    warp::serve(routes).run((ADDR, PORT)).await;
}

async fn handle_rejection(
    err: warp::Rejection,
) -> Result<impl warp::Reply, std::convert::Infallible> {
    Ok(warp::reply::json(&format!("{:?}", err)))
}

fn with_pipelines(
    pipelines: RunningPipelines,
) -> impl Filter<Extract = (RunningPipelines,), Error = std::convert::Infallible> + Clone {
    warp::any().map(move || pipelines.clone())
}
Dependencies:
[dependencies]
warp = "0.3"
tokio = { version = "1", features = ["full"] }
uuid = { version = "0.8.2", features = ["serde", "v4"] }
Results when I boot up, send a "create" request, and then an "end" request with the received ID:
starting server...
session requested OK!
Background going...
session end requested OK!: {"session_id": "6b984a45-38d8-41dc-bf95-422f75c5a429"}
occupied
removed from hashmap, key:6b984a45-38d8-41dc-bf95-422f75c5a429
sent end message|()| to fpipeline: 6b984a45-38d8-41dc-bf95-422f75c5a429
You'll notice that the background task starts (and doesn't end) when the "create" request is made, but when the "end" request is made, everything appears to complete successfully on the request (web) side, yet the background task never receives the message. As I've said, I've tried all the different channel types and moved things around to get it into this configuration, i.e. flattened and made as thread-safe as I could, or at least as much as I could think of. I'm greener than I would like in Rust, so any help would be VERY appreciated!
I think that the issue here is that you are sending the message and then immediately aborting the background task:
tx.send("goodbye".to_string());
//...
h.abort();
And the background task does not get a chance to process the message before the abort cancels it.
What you need is to join the task, not to abort it.
Curiously, tokio task handles do not have a join() method; instead you await the handle itself. But for that you need to own the handle, so first you have to extract it from the Switcher:
let mut s = v.write().await;
// steal the task handle
if let Some(h) = s.handle.take() {
    //...
    tx.send("goodbye".to_string());
    //...
    // join the task
    h.await.unwrap();
}
Note that joining a task may fail if the task was aborted or panicked. I'm just unwrapping (and thus panicking) in the code above, but you may want to do something different.
Or... you could just not wait for the task at all. In tokio, if you drop a task handle, the task is detached and will simply finish whenever it finishes.
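A minimal sketch of that detached variant, using the same Switcher fields as above:

let mut s = v.write().await;
if let Some(tx) = s.pipeline_end_tx.take() {
    let _ = tx.send("goodbye".to_string());
}
// Dropping the JoinHandle detaches the task instead of aborting it;
// it will receive the message and finish on its own.
drop(s.handle.take());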

Converting any Type to String in Rust

I am new to Rust and often use external crates in my small projects. Sometimes I want to have the corresponding output as a string instead of the returned type, or parse it to modify certain parts of the return value.
For example, I was using the crate notify and getting the paths of the changed files returned as an "event" type.
This is the example code:
extern crate notify;

use notify::{RecommendedWatcher, Watcher, RecursiveMode};
use std::sync::mpsc::channel;
use std::time::Duration;

fn watch() -> notify::Result<()> {
    // Create a channel to receive the events.
    let (tx, rx) = channel();

    // Automatically select the best implementation for your platform.
    // You can also access each implementation directly e.g. INotifyWatcher.
    let mut watcher: RecommendedWatcher = try!(Watcher::new(tx, Duration::from_secs(2)));

    // Add a path to be watched. All files and directories at that path and
    // below will be monitored for changes.
    try!(watcher.watch("/home/test/notify", RecursiveMode::Recursive));

    // This is a simple loop, but you may want to use more complex logic here,
    // for example to handle I/O.
    loop {
        match rx.recv() {
            Ok(event) => println!("{:?}", event),
            Err(e) => println!("watch error: {:?}", e),
        }
    }
}

fn main() {
    if let Err(e) = watch() {
        println!("error: {:?}", e)
    }
}
The crate does not implement Display for this type. How can I convert this event type into a string?
Thanks!
@devyan I had the same problem for a while. Here is what worked:
// (imports for notify 4.x's debounced API)
use notify::{watcher, DebouncedEvent, RecursiveMode, Watcher};
use std::sync::mpsc::channel;
use std::time::Duration;

let (tx, rx) = channel();

// Create a watcher object, delivering debounced events.
// The notification back-end is selected based on the platform.
let mut watcher = watcher(tx, Duration::from_secs(10)).unwrap();

// Add a path to be watched. All files and directories at that path and
// below will be monitored for changes.
watcher.watch(dir, RecursiveMode::Recursive).unwrap();

loop {
    match rx.recv() {
        Ok(event) => {
            // Your debounced events...
            println!("{:?}", event);
            match event {
                DebouncedEvent::Write(filepath_buf) => {
                    // Your file path...
                    println!("{:?}", filepath_buf.as_path())
                }
                _ => {}
            }
        }
        Err(e) => println!("watch error: {:?}", e),
    }
}
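More generally, notify's event types implement Debug, and any Debug type can be rendered to a String with format!; a minimal illustration:

// Works for any type implementing Debug, e.g. notify's DebouncedEvent:
let event_str = format!("{:?}", event);
// Types that implement Display can use to_string() directly:
let port_str = 3000.to_string();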

Why does reading from a Rusoto S3 stream inside an Actix Web handler cause a deadlock?

I'm writing an application using actix_web and rusoto_s3.
When I run a command outside of an actix request, directly from main, it runs fine and get_object works as expected. When it is encapsulated inside an actix_web request, the stream is blocked forever.
I have a client that is shared across all requests, encapsulated in an Arc (this happens in actix data internals).
Full code:
fn index(
    _req: HttpRequest,
    path: web::Path<String>,
    s3: web::Data<S3Client>,
) -> impl Future<Item = HttpResponse, Error = actix_web::Error> {
    s3.get_object(GetObjectRequest {
        bucket: "my_bucket".to_owned(),
        key: path.to_owned(),
        ..Default::default()
    })
    .and_then(move |res| {
        info!("Response {:?}", res);
        let mut stream = res.body.unwrap().into_blocking_read();
        let mut body = Vec::new();
        stream.read_to_end(&mut body).unwrap();
        match process_file(body.as_slice()) {
            Ok(result) => Ok(result),
            Err(error) => Err(RusotoError::from(error)),
        }
    })
    .map_err(|e| match e {
        RusotoError::Service(GetObjectError::NoSuchKey(key)) => {
            actix_web::error::ErrorNotFound(format!("{} not found", key))
        }
        error => {
            error!("Error: {:?}", error);
            actix_web::error::ErrorInternalServerError("error")
        }
    })
    .from_err()
    .and_then(move |img| HttpResponse::Ok().body(Body::from(img)))
}

fn health() -> HttpResponse {
    HttpResponse::Ok().finish()
}

fn main() -> std::io::Result<()> {
    let name = "rust_s3_test";
    env::set_var("RUST_LOG", "debug");
    pretty_env_logger::init();
    let sys = actix_rt::System::builder().stop_on_panic(true).build();
    let prometheus = PrometheusMetrics::new(name, "/metrics");
    let s3 = S3Client::new(Region::Custom {
        name: "eu-west-1".to_owned(),
        endpoint: "http://localhost:9000".to_owned(),
    });
    let s3_client_data = web::Data::new(s3);
    Server::build()
        .bind(name, "0.0.0.0:8080", move || {
            HttpService::build().keep_alive(KeepAlive::Os).h1(App::new()
                .register_data(s3_client_data.clone())
                .wrap(prometheus.clone())
                .wrap(actix_web::middleware::Logger::default())
                .service(web::resource("/health").route(web::get().to(health)))
                .service(web::resource("/{file_name}").route(web::get().to_async(index))))
        })?
        .start();
    sys.run()
}
In stream.read_to_end the thread blocks and never resolves.
I have tried cloning the client per request and also creating a new client per request, but I got the same result in all scenarios.
Am I doing something wrong?
It works if I don't use it asynchronously...
let mut stream = s3
    .get_object(GetObjectRequest {
        bucket: "my_bucket".to_owned(),
        key: path.to_owned(),
        ..Default::default()
    })
    .sync()
    .unwrap()
    .body
    .unwrap()
    .into_blocking_read();
let mut body = Vec::new();
io::copy(&mut stream, &mut body);
Is this an issue with Tokio?
let mut stream = res.body.unwrap().into_blocking_read();
Check the implementation of into_blocking_read(): it calls .wait(). You shouldn't call blocking code inside a Future.
Since Rusoto's body is a Stream, there is a way to read it asynchronously:
.and_then(move |res| {
    info!("Response {:?}", res);
    let stream = res.body.unwrap();
    stream.concat2().map(move |file| {
        process_file(&file[..]).unwrap()
    })
    .map_err(|e| RusotoError::from(e))
})
process_file should not block the enclosing Future. If it needs to block, you may consider running it on a new thread or encapsulating it with tokio_threadpool's blocking.
Note: you can use tokio_threadpool's blocking in your implementation, but I recommend you understand how it works first.
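A rough sketch of that pattern, modeled on the tokio_threadpool docs (futures 0.1 era; body and process_file are the names from the question):

use futures::future::poll_fn;
use tokio_threadpool::blocking;

// blocking() runs the closure on the pool's reserved blocking capacity;
// poll_fn adapts the Poll it returns into a Future. This future must be
// spawned on a tokio_threadpool-backed runtime.
let processed = poll_fn(move || {
    blocking(|| process_file(&body[..])).map_err(|_| panic!("the thread pool shut down"))
});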
If you are not aiming to load the whole file into memory, you can use for_each:
stream.for_each(|part| {
    // process each part in here
    // Warning! Do not add blocking code here either.
    Ok(())
})
See also:
What is the best approach to encapsulate blocking I/O in future-rs?
Why does Future::select choose the future with a longer sleep period first?

How can I use Server-Sent events in Iron?

I have a small Rust application that receives requests through a serial port, does some processing, and saves the results locally. I wanted to use a browser as a remote monitor so I can see everything that is happening, and as I understand it, Server-Sent Events are a good fit for that.
I tried using Iron for that, but I can't find a way to keep the connection open. The request handlers all need to return a Response, so I can't keep sending data.
This was my (dumb) attempt:
fn monitor(req: &mut Request) -> IronResult<Response> {
    let mut headers = Headers::new();
    headers.set(ContentType(Mime(TopLevel::Text, SubLevel::EventStream, vec![])));
    headers.set(CacheControl(vec![CacheDirective::NoCache]));
    println!("{:?}", req);
    let mut count = 0;
    loop {
        let mut response = Response::with((iron::status::Ok, format!("data: Count!:{}", count)));
        response.headers = headers.clone();
        return Ok(response); // obviously won't do what I want
        count += 1;
        std::thread::sleep_ms(1000);
    }
}
I think the short answer is: you can't. The current version of Iron is built around a single request-response interaction. You can see this in your code: the only way to send a response is to return it, terminating the handler thread.
There's an issue in Iron about utilizing the new async support in Hyper, which itself was merged relatively recently. There are even other people trying to use Server-Sent Events in Hyper who haven't succeeded yet.
If you are willing to use the Hyper master branch, something like this seems to work. No guarantees that this is a good solution or that it doesn't eat up all your RAM or CPU. It seems to work in Chrome though.
extern crate hyper;

use std::time::{Duration, Instant};
use std::io::prelude::*;

use hyper::{Control, Encoder, Decoder, Next};
use hyper::server::{Server, HandlerFactory, Handler, Request, Response};
use hyper::status::StatusCode;
use hyper::header::ContentType;
use hyper::net::HttpStream;

fn main() {
    let address = "0.0.0.0:7777".parse().expect("Invalid address");
    let server = Server::http(&address).expect("Invalid server");
    let (_listen, server_loop) = server.handle(MyFactory).expect("Failed to handle");
    println!("Starting...");
    server_loop.run();
}

struct MyFactory;

impl HandlerFactory<HttpStream> for MyFactory {
    type Output = MyHandler;

    fn create(&mut self, ctrl: Control) -> Self::Output {
        MyHandler {
            control: ctrl,
        }
    }
}

struct MyHandler {
    control: Control,
}

impl Handler<HttpStream> for MyHandler {
    fn on_request(&mut self, _request: Request<HttpStream>) -> Next {
        println!("A request was made");
        Next::write()
    }

    fn on_request_readable(&mut self, _request: &mut Decoder<HttpStream>) -> Next {
        println!("Request has data to read");
        Next::write()
    }

    fn on_response(&mut self, response: &mut Response) -> Next {
        println!("A response is ready to be sent");
        response.set_status(StatusCode::Ok);
        let mime = "text/event-stream".parse().expect("Invalid MIME");
        response.headers_mut().set(ContentType(mime));
        every_duration(Duration::from_secs(1), self.control.clone());
        Next::wait()
    }

    fn on_response_writable(&mut self, response: &mut Encoder<HttpStream>) -> Next {
        println!("A response can be written");
        // Waited long enough, send some data
        let fake_data = r#"event: userconnect
data: {"username": "bobby", "time": "02:33:48"}"#;
        println!("Writing some data");
        response.write_all(fake_data.as_bytes()).expect("Failed to write");
        response.write_all(b"\n\n").expect("Failed to write");
        Next::wait()
    }
}

use std::thread;

fn every_duration(max_elapsed: Duration, control: Control) {
    let mut last_sent: Option<Instant> = None;
    let mut count = 0;
    thread::spawn(move || {
        loop {
            // Terminate after a fixed number of messages
            if count >= 5 {
                println!("Maximum messages sent, ending");
                control.ready(Next::end()).expect("Failed to trigger end");
                return;
            }
            // Wait a little while between messages
            if let Some(last) = last_sent {
                let elapsed = last.elapsed();
                println!("It's been {:?} since the last message", elapsed);
                if elapsed < max_elapsed {
                    let remaining = max_elapsed - elapsed;
                    println!("There's {:?} remaining", remaining);
                    thread::sleep(remaining);
                }
            }
            // Trigger a message
            control.ready(Next::write()).expect("Failed to trigger write");
            last_sent = Some(Instant::now());
            count += 1;
        }
    });
}
And the client-side JS:
var evtSource = new EventSource("http://127.0.0.1:7777");
evtSource.addEventListener("userconnect", function(e) {
    const obj = JSON.parse(e.data);
    console.log(obj);
}, false);
