How can I print out a command's output?
use std::process::Command;

let mut telnet_scan_command = Command::new("telnet");
telnet_scan_command.arg("0.0.0.0");
match telnet_scan_command.output() {
    Ok(o) => {
        // output() waits for the command to exit and captures its stdout
        println!("{}", unsafe { String::from_utf8_unchecked(o.stdout) })
    }
    Err(e) => {
        println!("Something went wrong. Log: {}", e)
    }
}
The program runs perfectly fine, but the output should be:
Trying 0.0.0.0...
telnet: Unable to connect to remote host: Connection refused
The actual output is:
Trying 0.0.0.0...
How can I get the rest of the output?
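The missing line is almost certainly written to the command's standard error rather than standard output: output() captures the two streams separately, so the "Unable to connect" message ends up in o.stderr, which the code above never prints. A minimal sketch that prints both captured streams (using from_utf8_lossy here to avoid the unsafe conversion):

use std::process::Command;

fn main() {
    let mut telnet_scan_command = Command::new("telnet");
    telnet_scan_command.arg("0.0.0.0");
    match telnet_scan_command.output() {
        Ok(o) => {
            // output() captures stdout and stderr as separate byte buffers.
            print!("{}", String::from_utf8_lossy(&o.stdout));
            print!("{}", String::from_utf8_lossy(&o.stderr));
        }
        Err(e) => println!("Something went wrong. Log: {}", e),
    }
}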
I'm trying to receive a message (server side) on the network using TcpListener in Rust.
Here's the server code:
use std::io::{BufRead, BufReader};
use std::net::TcpListener;

// Instantiate the TcpListener
let server = TcpListener::bind("0.0.0.0:52001").expect("Could not bind");
match server.accept() { // Wait for a new connection from a client
    Ok((stream, addr)) => {
        println!("New connection from {}", addr);
        // Wrap the stream in a BufReader
        let mut reader = BufReader::new(&stream);
        let mut message = String::new();
        loop { // Enter the loop once a client is connected
            reader.read_line(&mut message).expect("Cannot read new line");
            println!("message received: {}", message);
            message.clear();
        }
    }
    Err(e) => {
        println!("Fail: {:?}", e)
    }
}
Here's my Kotlin client:
Socket("192.168.134.138", 52001).use { client ->
client.getOutputStream().use { out ->
out.write("test\n".toByteArray())
}
}
while(true) {
Thread.sleep(15_000)
}
The client sends the following line: test\n, which ends with a linebreak so the server can read it.
The intended behaviour is that the server prints message received: test and then waits at the read_line() call for the next line.
It partly works, because I do receive the test, but read_line() does not seem to block or wait for another message, so it creates an infinite loop. In the terminal I'm getting:
New connection from 192.168.134.123:7869
message received: test
message received:
message received:
message received:
message received:
Process finished with exit code 130 (interrupted by signal 2: SIGINT)
And I have to stop the program forcefully.
Any ideas?
To detect the end of the stream, you need to check if read_line() returned Ok(0):
From the docs:
If this function returns Ok(0), the stream has reached EOF.
loop { // Entering the loop when a client is connected
    let mut message = String::new();
    if reader.read_line(&mut message).expect("Cannot read new line") == 0 {
        break;
    }
    println!("message received: {}", message);
}
Another option is to use the BufReader::lines() iterator:
for line in reader.lines() {
    let message = line.expect("Cannot read new line");
    println!("message received: {}", message);
}
This approach is a bit inefficient, as it allocates a new String on every iteration. For best performance, you should allocate a single String and reuse it, as @BlackBeans pointed out in a comment:
let mut message = String::new();
loop { // Entering the loop when a client is connected
    message.clear();
    if reader.read_line(&mut message).expect("Cannot read new line") == 0 {
        break;
    }
    println!("message received: {}", message);
}
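For reference, plugging the fix back into the accept code from the question gives a complete server sketch (same bind address as above):

use std::io::{BufRead, BufReader};
use std::net::TcpListener;

fn main() {
    let server = TcpListener::bind("0.0.0.0:52001").expect("Could not bind");
    match server.accept() {
        Ok((stream, addr)) => {
            println!("New connection from {}", addr);
            let mut reader = BufReader::new(&stream);
            let mut message = String::new();
            loop {
                message.clear();
                // Ok(0) means the client closed its side of the connection.
                if reader.read_line(&mut message).expect("Cannot read new line") == 0 {
                    break;
                }
                println!("message received: {}", message);
            }
        }
        Err(e) => println!("Fail: {:?}", e),
    }
}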
I run the ping-pong example in two different console windows, as described in that tutorial.
use libp2p::futures::StreamExt;
use libp2p::ping::{Ping, PingConfig};
use libp2p::swarm::{Swarm, SwarmEvent};
use libp2p::{identity, Multiaddr, PeerId};
use std::error::Error;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let local_key = identity::Keypair::generate_ed25519();
    let local_peer_id = PeerId::from(local_key.public());
    println!("Local peer id: {:?}", local_peer_id);

    let transport = libp2p::development_transport(local_key).await?;
    let behaviour = Ping::new(PingConfig::new().with_keep_alive(true));
    let mut swarm = Swarm::new(transport, behaviour, local_peer_id);
    swarm.listen_on("/ip4/0.0.0.0/tcp/0".parse()?)?;

    if let Some(addr) = std::env::args().nth(1) {
        let remote: Multiaddr = addr.parse()?;
        swarm.dial(remote)?;
        println!("Dialed {}", addr)
    }

    loop {
        match swarm.select_next_some().await {
            SwarmEvent::NewListenAddr { address, .. } => println!("Listening on {:?}", address),
            SwarmEvent::Behaviour(event) => println!("{:?}", event),
            _ => {}
        }
    }
}
console 1:
cargo run
Local peer id: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN")
Listening on "/ip4/127.0.0.1/tcp/53912"
Listening on "/ip4/192.168.100.4/tcp/53912"
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Ping { rtt: 1.112947ms }) }
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh"), result: Ok(Ping { rtt: 1.682508ms }) }
console 2:
cargo run -- /ip4/192.168.100.4/tcp/53912
Local peer id: PeerId("12D3KooWQTRrShA41Kh7qoQYUc3G63uDZcXPSFw68AmzZqmaaHSh")
Dialed /ip4/192.168.100.4/tcp/53912
Listening on "/ip4/127.0.0.1/tcp/53913"
Listening on "/ip4/192.168.100.4/tcp/53913"
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Ping { rtt: 869.493µs }) }
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Pong) }
Event { peer: PeerId("12D3KooWGi2AZgQL5sXDx4WYcPbtpXQDJJf2JTzEfHMKoYARohUN"), result: Ok(Ping { rtt: 2.109459ms }) }
What confuses me is the order of the ping and pong messages. This example creates two nodes which send ping-pong messages to each other. But why do they start with a Pong event every time I run these nodes? I would expect a Ping event first and then a Pong event. Am I missing something?
That seems to simply be down to the library's semantics: if you go look at the documentation for PingSuccess (the "success" side of a ping operation), the two variants are:
Pong: Received a ping and sent back a pong.
Ping: Sent a ping and received back a pong. Includes the round-trip time.
So the events do not mean quite what the names suggest: while "Pong" means "I received a ping and sent back a pong", "Ping" means "I received the response to my own ping", which is why it can report the RTT. For a single ping, the timeline is:

peer A                          peer B
ping ->
                                receives the ping, sends back a pong
                                => triggers PingSuccess::Pong on B
<- pong
receives the pong
=> triggers PingSuccess::Ping on A

Since both peers ping each other over the same connection, each peer answers the other side's ping (its Pong event) before the response to its own ping comes back (its Ping event), hence the order you see here.
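To make the two cases visible in the example's output, the generic Behaviour(event) arm in the loop above could print something more descriptive. A sketch, assuming the PingEvent and PingSuccess types exposed by libp2p::ping in the version the example uses (the peer and result fields are assumed to be public, as they were at the time):

// Sketch: assumes libp2p::ping::{PingEvent, PingSuccess} from the same
// libp2p version as the example above.
use libp2p::ping::{PingEvent, PingSuccess};

fn describe(event: PingEvent) {
    match event.result {
        // We received a ping from the remote peer and answered it.
        Ok(PingSuccess::Pong) => {
            println!("answered a ping from {}", event.peer)
        }
        // Our own ping completed its round trip, so the RTT is known.
        Ok(PingSuccess::Ping { rtt }) => {
            println!("ping to {} answered in {:?}", event.peer, rtt)
        }
        Err(e) => println!("ping to {} failed: {:?}", event.peer, e),
    }
}

In the event loop, the SwarmEvent::Behaviour(event) arm would then call describe(event) instead of printing the debug representation.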
If you take a look at the protocol source, you will notice that the protocol always pings first. But in the example only the incoming events are logged. So both nodes first send a ping (which is not logged by itself), then start receiving the pong responses and the ping requests.
let addr: SocketAddr = self.listen_bind.parse().unwrap();
let mut listener = TcpListener::bind(&addr).await?;
info!("Nightfort listening on {}", addr);
loop {
    info!("debug1");
    match listener.accept().await {
        Ok((stream, addr)) => {
            info!("debug2");
            let watcher = self.watcher.clone();
            info!("debug3");
            tokio::spawn(async move {
                info!("debug4");
                if let Err(e) = Nightfort::process(watcher, stream, addr).await {
                    error!("Error on this ranger: {}, error: {:?}", addr, e);
                }
            });
        }
        Err(e) => error!("Socket conn error {}", e),
    }
    // let (stream, addr) = listener.accept().await?;
}
I spent two days troubleshooting this weird issue. The Rust process runs very well on my local macOS, on Linux, and in Docker on Linux, but it does not work on AWS Linux or Kubernetes on AWS. The main issue I found is that the process hangs on accept() even though a client thinks it has established a connection to the server and has started sending messages to it. ps shows the server process is in S status. The code was originally written in nightly Rust with alpha libraries, and I thought there might be a bug in a dependency, so I updated my code and switched to stable Rust with the latest releases of the dependencies, but the issue is still there.
I've got a problem with running logstash. My conf looks like this:
input {
  udp {
    port => 1514
    type => docker
  }
}
filter {
  grok {
    match => {
      "message" => "<%{NUMBER}>%{DATA}(?:\s+)%{DATA:hostname}(?:\s+)%{DATA:imageName}(?:\s+)%{DATA:containerName}(?:\s*\[%{NUMBER}\]:) (\s+(?<logDate>%{YEAR}-%{MONTHNUM}-%{MONTHDAY}\s+%{HOUR}:%{MINUTE}:%{SECOND}) %{LOGLEVEL:logLevel}(?:\s*);* %{DATA:logAdditionalInfo};*)?%{GREEDYDATA:logMsg}"
    }
    keep_empty_captures => false
    remove_field => ["message"]
  }
}
output {
  if [type] == "gelf" {
    elasticsearch {
      index => "phpteam-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch { }
  }
}
The configuration is correct, but after running it, /var/log/logstash/logstash.log shows the following output:
{:timestamp=>"2016-06-22T11:43:03.105000+0200", :message=>"SIGTERM
received. Shutting down the pipeline.", :level=>:warn}
{:timestamp=>"2016-06-22T11:43:03.532000+0200", :message=>"UDP
listener died", :exception=>#,
:backtrace=>["org/jruby/RubyIO.java:3682:in select'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.3/lib/logstash/inputs/udp.rb:77:in
udp_listener'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-2.0.3/lib/logstash/inputs/udp.rb:50:in
run'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:206:in
inputworker'",
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:199:in
`start_input'"], :level=>:warn}
The only thing I found to work around this error is to edit those .rb files, but sadly I have no idea how to do that. Could you help me somehow?
Thanks in advance.
I found a solution that is not perfect, but it works, so maybe it will help somebody.
After installing the whole instance on a new server, everything works fine.
Everything crashed after upgrading logstash/elasticsearch/kibana, so maybe there's something wrong with the configuration files, but I couldn't figure out which ones.
I want to execute an external program via std::process::Command::spawn. Furthermore, I want to know the reason why spawning the process failed: is it because the given program name doesn't exist / is not in PATH, or because of some other error?
Example code of what I want to achieve:
match Command::new("rustc").spawn() {
Ok(_) => println!("Was spawned :)"),
Err(e) => {
if /* ??? */ {
println!("`rustc` was not found! Check your PATH!")
} else {
println!("Some strange error occurred :(");
}
},
}
When I try to execute a program that isn't on my system, I get:
Error { repr: Os { code: 2, message: "No such file or directory" } }
But I don't want to rely on that. Is there a way to determine if a program exists in PATH?
You can use e.kind() to find out which ErrorKind the error was.
match Command::new("rustc").spawn() {
Ok(_) => println!("Was spawned :)"),
Err(e) => {
if let NotFound = e.kind() {
println!("`rustc` was not found! Check your PATH!")
} else {
println!("Some strange error occurred :(");
}
},
}
Edit: I didn't find any explicit documentation about what error kinds can be returned, so I looked up the source code. It seems the error is returned straight from the OS. The relevant code seems to be in src/libstd/sys/[unix/windows/..]/process.rs. A snippet from the Unix version:
One more edit: on second thought, I'm not sure the license actually allows posting parts of the Rust sources here, so you can see it on GitHub.
It just returns Error::from_raw_os_error(...). The Windows version seemed more complicated, and I couldn't immediately find where it even returns errors from. Either way, it seems you're at the mercy of your operating system in this regard. At least I found the following test in src/libstd/process.rs:
Same as above: see GitHub.
That seems to guarantee that an ErrorKind::NotFound should be returned at least when the binary is not found. It makes sense to assume that the OS wouldn't give a NotFound error in other cases, but who knows. If you want to be absolutely sure that the program really was not found, you'll have to search the directories in $PATH manually. Something like:
use std::env;
use std::fs;

fn is_program_in_path(program: &str) -> bool {
    if let Ok(path) = env::var("PATH") {
        for p in path.split(":") {
            let p_str = format!("{}/{}", p, program);
            if fs::metadata(p_str).is_ok() {
                return true;
            }
        }
    }
    false
}

fn main() {
    let program = "rustca"; // shouldn't be found
    if is_program_in_path(program) {
        println!("Yes.");
    } else {
        println!("No.");
    }
}
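Note that splitting on ":" is Unix-specific; Windows separates PATH entries with ";". A more portable variant (not part of the answer above; it is a sketch using std::env::split_paths, which handles the platform's separator) could look like this:

use std::env;
use std::path::PathBuf;

// Returns the full path of `program` if some directory in PATH contains it.
// env::split_paths takes care of the platform-specific separator (":" or ";").
fn find_program_in_path(program: &str) -> Option<PathBuf> {
    env::var_os("PATH").and_then(|paths| {
        env::split_paths(&paths)
            .map(|dir| dir.join(program))
            .find(|candidate| candidate.is_file())
    })
}

fn main() {
    match find_program_in_path("rustc") {
        Some(path) => println!("Found at {}", path.display()),
        None => println!("Not found."),
    }
}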