How to upload a file to SFTP using Rust?
The only useful link I have found is the openssh_sftp_client crate, but its minimal documentation makes using it really difficult.
Note: I am not talking about uploading via CLI tools like sftp or rftp.
I tried two crates, ssh2 and rust-ftp, but I am getting errors:
ssh2:
use std::io::prelude::*;
use std::net::TcpStream;
use std::path::Path;
use ssh2::Session;
// Connect to the local SSH server
let tcp = TcpStream::connect("SFTP_IP:PORT").unwrap();
let mut sess = Session::new().unwrap();
sess.set_tcp_stream(tcp);
sess.handshake().unwrap();
sess.userauth_agent("username").unwrap();
// Write the file (note: scp_send uses the SCP protocol, not SFTP; for SFTP use sess.sftp())
let mut remote_file = sess.scp_send(Path::new("remote"), 0o644, 10, None).unwrap();
remote_file.write_all(b"1234567890").unwrap();
// Close the channel and wait for the whole content to be transferred
remote_file.send_eof().unwrap();
remote_file.wait_eof().unwrap();
remote_file.close().unwrap();
remote_file.wait_close().unwrap();
ERROR:
rust-ftp:
use std::str;
use std::io::Cursor;
use ftp::FtpStream;
fn main() {
// Create a connection to an FTP server and authenticate to it.
let mut ftp_stream = FtpStream::connect("SFTP_IP:PORT").unwrap();
let _ = ftp_stream.login("username", "password").unwrap();
// Get the current directory that the client will be reading from and writing to.
println!("Current directory: {}", ftp_stream.pwd().unwrap());
// Change into a new directory, relative to the one we are currently in.
let _ = ftp_stream.cwd("test_data").unwrap();
// Retrieve (GET) a file from the FTP server in the current working directory.
let remote_file = ftp_stream.simple_retr("ftpext-charter.txt").unwrap();
println!("Read file with contents\n{}\n", str::from_utf8(&remote_file.into_inner()).unwrap());
// Store (PUT) a file from the client to the current working directory of the server.
let mut reader = Cursor::new("Hello from the Rust \"ftp\" crate!".as_bytes());
let _ = ftp_stream.put("greeting.txt", &mut reader);
println!("Successfully wrote greeting.txt");
// Terminate the connection to the server.
let _ = ftp_stream.quit();
}
ERROR:
I have managed to find a workaround, but it is not ideal: instead of creating a file locally and then uploading it, I am writing the file directly on the SFTP server.
use std::io::Write;
use std::net::TcpStream;
use ssh2::Session;
use std::path::Path;
let tcp = TcpStream::connect("IP:PORT").unwrap();
let mut sess = Session::new().unwrap();
sess.set_tcp_stream(tcp);
sess.handshake().unwrap();
sess.userauth_password("USER", "PSWD").unwrap();
let sftp = sess.sftp().unwrap();
sftp.mkdir(Path::new("path/to/sftp/dir"), 0o777).ok();
sftp.create(Path::new("path/to/file/in/sftp/dir/file.json"))
    .unwrap()
    .write_all(b"text to be written to file")
    .unwrap();
I am trying to perform a web POST request in Rust with NTLM authentication, but I can't figure it out.
Here is my current code:
let mut map = HashMap::new();
map.insert("test", "test");
let client = reqwest::Client::new();
let res = client.post(format!("{}/create", URL))
.json(&map)
.send()
.await?;
So, to add NTLM authentication, I tried the curl crate with the "ntlm" feature enabled:
let mut easy = Easy::new();
easy.url(&format!("{}/create", URL)).unwrap();
easy.post(true).unwrap();
easy.post_field_size(map.len() as u64).unwrap();
let mut transfer = easy.transfer();
transfer.read_function(|buf| {
Ok(map.read(buf).unwrap_or(0))
}).unwrap();
transfer.perform().unwrap();
Sadly, I got this error:
Ok(map.read(buf).unwrap_or(0))
| ^^^^ method not found in `HashMap<&str, String>`
I understand that I need to convert my HashMap to bytes, but I can't find an easy way to do that; I tried as_bytes(), but that doesn't work either.
My last question: is this crate a good way to handle NTLM authentication automatically?
I'm making a scraper and want to download the HTML source of a website so that it can be parsed. I'm using fantoccini with WebDriver to log in to the site. I have an asynchronous function.
Now that I'm logged in, what do I need to do to extract the HTML source?
What I have so far is this:
let htmlsrc = client.source();
let mut file = File::create("htmlsrc.html").unwrap();
fs::write("htmlsrc.txt", htmlsrc).await?;
However this gives me this error:
error[E0277]: the trait bound `impl Future: AsRef<[u8]>` is not satisfied
--> src/main.rs:44:30
|
44 | fs::write("htmlsrc.txt", htmlsrc).await?;
| --------- ^^^^^^^ the trait `AsRef<[u8]>` is not implemented for `impl Future`
I'm new to Rust, so I'm not very sure of what I'm doing.
Any help would be appreciated!
The full code is this:
use fantoccini::{ClientBuilder, Locator, Client};
use std::time::Duration;
use tokio::time::sleep;
use std::fs::File;
use tokio::fs;
// let's set up the sequence of steps we want the browser to take
#[tokio::main]
async fn main() -> Result<(), fantoccini::error::CmdError> {
let s_email = "email";
let password = "pass";
// Connecting using "native" TLS (with feature `native-tls`; on by default)
let mut c = ClientBuilder::native()
.connect("http://localhost:4444").await
.expect("failed to connect to WebDriver");
// first, go to the Managebac login page
c.goto("https://bavarianis.managebac.com/login").await?;
// define email field with css selector
let mut email_field = c.wait().for_element(Locator::Css("#session_login")).await?;
// type in email
email_field.send_keys(s_email).await?;
// define password field with css selector
let mut pass_field = c.wait().for_element(Locator::Css("#session_password")).await?;
// type in password
pass_field.send_keys(password).await?;
// define sign in button with xpath
let signin_button = "/html/body/main/div/div/form/div[2]/input";
let signin_button = c.wait().for_element(Locator::XPath(signin_button)).await?;
// click sign in
signin_button.click().await?;
let htmlsrc = c.source();
let mut file = File::create("htmlsrc.html").unwrap();
fs::write("htmlsrc.txt", htmlsrc).await?;
//temp to observe
sleep(Duration::from_millis(6000)).await;
//c.close().await?;
Ok(())
}
I'm trying to make a server in Rust using the TCP protocol. I can build a basic server by following the language documentation, but I don't want a new thread created for every connection, nor do I want to use a thread pool, because the TCP connections will be persistent, i.e. they will last a long time (around 30 minutes to 2 hours). So I loop over all the connections and, with a 1 millisecond timeout, try to read whether any new packets have arrived. However, something tells me this is not the right thing to do. Any idea?
Thanks in advance.
You are probably looking for an asynchronous runtime. Like most runtimes, tokio can be customized to work with a single thread; if you don't have many connections you certainly don't need more than one. If we translate the example @Benjamin Boortz provided:
use tokio::io::*;
use tokio::net::{TcpListener, TcpStream};
#[tokio::main(flavor = "current_thread")]
async fn main() {
let listener = TcpListener::bind("127.0.0.1:7878").await.unwrap();
while let Ok((stream, _address)) = listener.accept().await {
// this is similar to spawning a new thread.
tokio::spawn(handle_connection(stream));
}
}
async fn handle_connection(mut stream: TcpStream) {
let mut buffer = [0; 1024];
stream.read(&mut buffer).await.unwrap();
println!("Request: {}", String::from_utf8_lossy(&buffer[..]));
}
This code is concurrent, yet single threaded, which seems to be what you want. I recommend you check the tokio tutorial. It is a really good resource if you are unfamiliar with asynchronous programming in Rust.
Not sure if this is what you need, but "The Rust Programming Language" book has an example of a single-threaded TCP server: https://doc.rust-lang.org/book/ch20-01-single-threaded.html
Please refer to Listing 20-2:
use std::io::prelude::*;
use std::net::TcpListener;
use std::net::TcpStream;
fn main() {
let listener = TcpListener::bind("127.0.0.1:7878").unwrap();
for stream in listener.incoming() {
let stream = stream.unwrap();
handle_connection(stream);
}
}
fn handle_connection(mut stream: TcpStream) {
let mut buffer = [0; 1024];
stream.read(&mut buffer).unwrap();
println!("Request: {}", String::from_utf8_lossy(&buffer[..]));
}
Ref: https://doc.rust-lang.org/book/ch20-01-single-threaded.html#reading-the-request
I'm having trouble writing a few strings into my .txt file without them overwriting each other.
This is an example:
for i in 1..100{
fs::write("*PATH*", i.to_string()).expect("Unable to write file");
}
I'm thinking it should write every single one right after the other, but it doesn't! It overwrites, and when I open the document it's just the last written number.
I couldn't find anything on the web since this way of writing into files seems to be rather new.
You can open the File once before entering the loop. You can further simplify writing to the file by using the write! and writeln! macros, which let you use Rust's formatting functionality and avoid the explicit i.to_string().
Since you're performing a lot of (small) writes, consider also wrapping the file in a BufWriter to minimize the total number of system calls performed.
use std::fs::File;
use std::io::{BufWriter, Write};
fn main() {
let path = "...";
let f = File::create(path).expect("unable to create file");
let mut f = BufWriter::new(f);
for i in 1..100 {
write!(f, "{}", i).expect("unable to write");
}
}
If the file already exists and you want to keep appending to it every time you execute your program, then you can open it using OpenOptions, specifically enabling append mode with append(true):
use std::fs::OpenOptions;
use std::io::{BufWriter, Write};
fn main() {
let path = "...";
let f = OpenOptions::new()
.write(true)
.append(true)
.open(path)
.expect("unable to open file");
let mut f = BufWriter::new(f);
for i in 1..100 {
write!(f, "{}", i).expect("unable to write");
}
}
I couldn't find anything on the web since this way of writing into files seems to be rather new.
It's not rather new, it's rather wrong (for this use case). Open the file beforehand and write to that handle so the writes append, instead of calling fs::write in every loop iteration, which reopens and truncates the file each time; that is not only slow, it is exactly what causes your file to be overwritten:
use std::fs::OpenOptions;
use std::io::prelude::*;
let mut file = OpenOptions::new()
.write(true)
.open("/path/to/file")
.expect("Could not open file");
for i in 1..100 {
file.write_all(i.to_string().as_bytes()).expect("Unable to write to file");
}
I want to download large files (500 MB) with hyper, and be able to resume the download if it fails.
Is there any way with hyper to run some function for each chunk of data received? The send() method returns a Result<Response>, but I can't find any methods on Response that return an iterator over chunks. Ideally I'd be able to do something like:
client.get(&url.to_string())
.send()
.map(|mut res| {
let mut chunk = String::new();
// write this chunk to disk
});
Is this possible, or will map only be called once hyper has downloaded the entire file?
Is there any way with hyper to run some function for each chunk of data received?
Hyper's Response implements Read. That means Response is a stream, and you can read arbitrary chunks of data from it as you usually would with a stream.
For what it's worth, here's a piece of code I use to download large files from ICECat. I'm using the Read interface in order to display the download progress in the terminal.
The variable response here is an instance of Hyper's Response.
{
let mut file = try_s!(fs::File::create(&tmp_path));
let mut deflate = try_s!(GzDecoder::new(response));
let mut buf = [0; 128 * 1024];
let mut written = 0;
loop {
status_line! ("icecat_fetch] " (url) ": " (written / 1024 / 1024) " MiB.");
let len = match deflate.read(&mut buf) {
Ok(0) => break, // EOF.
Ok(len) => len,
Err(ref err) if err.kind() == io::ErrorKind::Interrupted => continue,
Err(err) => return ERR!("{}: Download failed: {}", url, err),
};
try_s!(file.write_all(&buf[..len]));
written += len;
}
}
try_s!(fs::rename(tmp_path, target_path));
status_line_clear();
I want to download large files (500mb) with hyper, and be able to resume if the download fails.
This is usually implemented with the HTTP "Range" header (cf. RFC 7233).
Not every server out there supports the "Range" header. I've seen a lot of servers with a custom HTTP stack and without the proper "Range" support, or with the "Range" header disabled for some reason. So skipping the Hyper's Response chunks might be a necessary fallback.
But if you want to speed things up and save traffic then the primary means of resuming a stopped download should be by using the "Range" header.
P.S. With Hyper 0.12 the response body returned by Hyper is a Stream, and to run some function for each chunk of data received we can use the for_each stream combinator:
extern crate futures;
extern crate futures_cpupool;
extern crate hyper; // 0.12
extern crate hyper_rustls;
use futures::Future;
use futures_cpupool::CpuPool;
use hyper::rt::Stream;
use hyper::{Body, Client, Request};
use hyper_rustls::HttpsConnector;
use std::thread;
use std::time::Duration;
fn main() {
let url = "https://steemitimages.com/DQmYWcEumaw1ajSge5PcGpgPpXydTkTcqe1daF4Ro3sRLDi/IMG_20130103_103123.jpg";
// In real life we'd want an asynchronous reactor, such as the tokio_core, but for a short example the `CpuPool` should do.
let pool = CpuPool::new(1);
let https = HttpsConnector::new(1);
let client = Client::builder().executor(pool.clone()).build(https);
// `unwrap` is used because there are different ways (and/or libraries) to handle the errors and you should pick one yourself.
// Also to keep this example simple.
let req = Request::builder().uri(url).body(Body::empty()).unwrap();
let fut = client.request(req);
// Rebinding (shadowing) the `fut` variable allows us (in smart IDEs) to more easily examine the gradual weaving of the types.
let fut = fut.then(move |res| {
let res = res.unwrap();
println!("Status: {:?}.", res.status());
let body = res.into_body();
// `for_each` returns a `Future` that we must embed into our chain of futures in order to execute it.
body.for_each(move |chunk| {println!("Got a chunk of {} bytes.", chunk.len()); Ok(())})
});
// Handle the errors: we need error-free futures for `spawn`.
let fut = fut.then(move |r| -> Result<(), ()> {r.unwrap(); Ok(())});
// Spawning the future onto a runtime starts executing it in background.
// If not spawned onto a runtime the future will be executed in `wait`.
//
// Note that we should keep the future around.
// To save resources most implementations would *cancel* the dropped futures.
let _fut = pool.spawn(fut);
thread::sleep (Duration::from_secs (1)); // or `_fut.wait()`.
}