I'm trying to use opentelemetry and opentelemetry-otlp to provide observability data over OTLP to Honeycomb.
I'm using something like this as a proof of concept (extracted out into this repo if you want to run it: https://github.com/timfpark/honeycomb-rust-poc)
fn init_tracer(metadata: &MetadataMap) -> Result<sdktrace::Tracer, TraceError> {
    let opentelemetry_endpoint =
        env::var("OTEL_ENDPOINT").unwrap_or_else(|_| "https://api.honeycomb.io".to_owned());
    let opentelemetry_endpoint =
        Url::parse(&opentelemetry_endpoint).expect("OTEL_ENDPOINT is not a valid url");

    opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .tonic()
                .with_endpoint(opentelemetry_endpoint.as_str())
                .with_metadata(metadata.clone())
                .with_tls_config(
                    ClientTlsConfig::new().domain_name(
                        opentelemetry_endpoint
                            .host_str()
                            .expect("OTEL_ENDPOINT should have a valid host"),
                    ),
                ),
        )
        .install_batch(opentelemetry::runtime::Tokio)
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let mut metadata = MetadataMap::with_capacity(2);
    metadata.insert(
        "x-honeycomb-team",
        "...honeycomb api key...".parse().unwrap(),
    );
    metadata.insert("x-honeycomb-dataset", "my-api".parse().unwrap());

    let tracer = init_tracer(&metadata).expect("failed to instantiate opentelemetry tracing");

    tracing_subscriber::registry()
        .with(tracing_subscriber::EnvFilter::from_default_env())
        .with(tracing_opentelemetry::layer().with_tracer(tracer))
        .with(tracing_subscriber::fmt::layer())
        .try_init()
        .expect("failed to register tracer with registry");

    let tracer = global::tracer("ex.com/basic");
but I am getting:
2022-11-02T17:01:01.088429Z DEBUG hyper::client::connect::http: connecting to 52.5.162.226:443
2022-11-02T17:01:01.170767Z DEBUG hyper::client::connect::http: connected to 52.5.162.226:443
2022-11-02T17:01:01.171870Z DEBUG rustls::client::hs: No cached session for DnsName(DnsName(DnsName("api.honeycomb.io")))
2022-11-02T17:01:01.172555Z DEBUG rustls::client::hs: Not resuming any session
2022-11-02T17:01:01.269218Z DEBUG rustls::client::hs: ALPN protocol is Some(b"h2")
2022-11-02T17:01:01.269398Z DEBUG rustls::client::hs: Using ciphersuite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
2022-11-02T17:01:01.269504Z DEBUG rustls::client::tls12::server_hello: Server supports tickets
2022-11-02T17:01:01.269766Z DEBUG rustls::client::tls12: ECDHE curve is ECParameters { curve_type: NamedCurve, named_group: secp256r1 }
2022-11-02T17:01:01.269843Z DEBUG rustls::client::tls12: Server DNS name is DnsName(DnsName(DnsName("api.honeycomb.io")))
2022-11-02T17:01:01.271123Z WARN rustls::conn: Sending fatal alert BadCertificate
2022-11-02T17:01:01.271861Z DEBUG tonic::transport::service::reconnect: reconnect::poll_ready: hyper::Error(Connect, Custom { kind: InvalidData, error: InvalidCertificateData("invalid peer certificate: UnknownIssuer") })
2022-11-02T17:01:01.271967Z DEBUG tower::buffer::worker: service.ready=true processing request
2022-11-02T17:01:01.272169Z DEBUG tonic::transport::service::reconnect: error: error trying to connect: invalid peer certificate contents: invalid peer certificate: UnknownIssuer
OpenTelemetry trace error occurred. Exporter otlp encountered the following error(s): the grpc server returns error (The service is currently unavailable): , detailed error message: error trying to connect: invalid peer certificate contents: invalid peer certificate: UnknownIssuer
which seems to indicate that something about my TLS setup is not correct... Does anyone have a snippet of opentelemetry code in Rust that works with Honeycomb?
The problem is that you need to give ClientTlsConfig a root certificate that the target site (api.honeycomb.io) chains back to.
I found a suitable root cert installed in my container, and then made the program load it in.
Here's the code:
let pem = tokio::fs::read("/etc/ssl/certs/Starfield_Services_Root_Certificate_Authority_-_G2.pem").await.expect("read the cert file");
let cert = Certificate::from_pem(pem);

let mut metadata = MetadataMap::with_capacity(1);
metadata.insert("x-honeycomb-team", honeycomb_api_key.parse().unwrap());

let opentelemetry_endpoint =
    env::var("OTEL_ENDPOINT").unwrap_or_else(|_| "https://api.honeycomb.io".to_owned());
let opentelemetry_endpoint =
    Url::parse(&opentelemetry_endpoint).expect("OTEL_ENDPOINT is not a valid url");

opentelemetry_otlp::new_pipeline()
    .tracing()
    .with_exporter(
        opentelemetry_otlp::new_exporter()
            .tonic()
            .with_endpoint(opentelemetry_endpoint.as_str())
            .with_metadata(metadata.clone())
            .with_tls_config(ClientTlsConfig::new().ca_certificate(cert)),
    )
    .install_batch(opentelemetry::runtime::Tokio)
}
The first two lines are new; they load a root certificate from the filesystem.
Then use that to configure the ClientTlsConfig.
I chose that root certificate file based on the certificate details in the output of:
openssl s_client -connect api.honeycomb.io:443 -servername localhost
The last entry in the certificate chain resembled the filename. It included: /CN=Starfield Services Root Certificate Authority - G2
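This is not part of the original answer, but if the root certificate lives in different places across environments, the same two calls work with the path taken from an environment variable. A minimal sketch; OTEL_CA_CERT_PATH is a made-up variable name:

// Hypothetical helper reusing the calls above; OTEL_CA_CERT_PATH is a made-up name.
async fn load_ca_certificate() -> tonic::transport::Certificate {
    let path = std::env::var("OTEL_CA_CERT_PATH").unwrap_or_else(|_| {
        "/etc/ssl/certs/Starfield_Services_Root_Certificate_Authority_-_G2.pem".to_owned()
    });
    let pem = tokio::fs::read(&path).await.expect("read the CA certificate file");
    tonic::transport::Certificate::from_pem(pem)
}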
For reference, a complete Rust/Honeycomb/tracing/OpenTelemetry solution is now available here: https://github.com/Dhghomon/rust_opentelemetry_honeycomb.
It does not need to use certs.
UPDATE: Expanding a little since this is the accepted answer now.
The core tracer initialization code is as follows:
fn init_tracer() -> Result<sdktrace::Tracer, TraceError> {
    opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .http()
                .with_endpoint("https://api.honeycomb.io/v1/traces")
                .with_http_client(reqwest::Client::default())
                .with_headers(HashMap::from([
                    ("x-honeycomb-dataset".into(), DATASET_NAME.into()),
                    ("x-honeycomb-team".into(), API_KEY.into()),
                ]))
                .with_timeout(std::time::Duration::from_secs(2)),
        )
        // Replace with runtime::Tokio if using async main
        .install_batch(opentelemetry::runtime::TokioCurrentThread)
}
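The original answer doesn't show the Cargo manifest. As a rough sketch (feature names vary across opentelemetry-otlp releases, so check the docs for your version), the HTTP/reqwest exporter above typically needs something along these lines:

[dependencies]
# Pin versions that match each other in your project; "*" is only a placeholder here.
opentelemetry = { version = "*", features = ["rt-tokio-current-thread"] }
opentelemetry-otlp = { version = "*", features = ["http-proto", "reqwest-client"] }
reqwest = "*"
tracing = "*"
tracing-opentelemetry = "*"
tracing-subscriber = "*"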
The subscriber is configured like this:
let tracer = init_tracer().unwrap();
let telemetry = tracing_opentelemetry::layer().with_tracer(tracer);
let subscriber = tracing_subscriber::Registry::default().with(telemetry);
tracing::subscriber::set_global_default(subscriber).unwrap();
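As a quick usage sketch (not from the original answer; the function name and fields are made up for illustration): once the subscriber is the global default, spans created through the tracing macros are exported, and the batch exporter should be flushed before the process exits.

// Hypothetical instrumented function; name and fields are illustrative only.
#[tracing::instrument]
fn handle_request(user_id: u64) {
    tracing::info!(user_id, "handling request"); // recorded as an event on the span
}

fn run() {
    handle_request(42);
    // Flush any spans still buffered by the batch exporter before the process exits.
    opentelemetry::global::shutdown_tracer_provider();
}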
In addition to @jessitron's answer, I was also able to get Honeycomb working with a Rust service by using a separate OpenTelemetry Collector service. The instructions for how to configure that collector within a Kubernetes cluster are here, also by @jessitron.
With that in place, you can use a simpler configuration for reporting traces to the collector:
use opentelemetry::sdk::trace as sdktrace;
use opentelemetry::trace::{TraceContextExt, TraceError, Tracer};
use opentelemetry::{global, Key};
use opentelemetry_otlp::WithExportConfig;
use std::env;
use tracing_subscriber::prelude::*;
use url::Url;
fn init_tracer() -> Result<sdktrace::Tracer, TraceError> {
    // Configure OTEL_ENDPOINT to point at your collector
    let opentelemetry_endpoint =
        env::var("OTEL_ENDPOINT").unwrap_or_else(|_| "http://localhost:4317".to_owned());
    let opentelemetry_endpoint =
        Url::parse(&opentelemetry_endpoint).expect("OTEL_ENDPOINT is not a valid url");

    opentelemetry_otlp::new_pipeline()
        .tracing()
        .with_exporter(
            opentelemetry_otlp::new_exporter()
                .tonic()
                .with_endpoint(opentelemetry_endpoint.as_str()),
        )
        .install_batch(opentelemetry::runtime::Tokio)
}
#[tokio::main]
async fn main() {
    let tracer = init_tracer().expect("failed to instantiate opentelemetry tracing");

    tracing_subscriber::registry()
        .with(tracing_subscriber::EnvFilter::from_default_env())
        .with(tracing_opentelemetry::layer().with_tracer(tracer))
        .with(tracing_subscriber::fmt::layer())
        .try_init()
        .expect("failed to register tracer with registry");

    const LEMONS_KEY: Key = Key::from_static_str("lemons");
    const ANOTHER_KEY: Key = Key::from_static_str("ex.com/another");

    let tracer = global::tracer("ex.com/basic");
    tracer.in_span("operation", |cx| {
        let span = cx.span();
        span.add_event(
            "Nice operation!".to_string(),
            vec![Key::new("bogons").i64(100)],
        );
        span.set_attribute(ANOTHER_KEY.string("yes"));

        tracer.in_span("Sub operation...", |cx| {
            let span = cx.span();
            span.set_attribute(LEMONS_KEY.string("five"));
            span.add_event("Sub span event", vec![]);
        });
    });

    loop {
        tracing::info!("just sleeping, press ctrl-c to exit");
        tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
    }
}
Finally, if you use the Helm chart to deploy the collector, you will want a configuration that looks something like this:
values:
  mode: deployment
  config:
    exporters:
      otlp/honeycomb:
        endpoint: api.honeycomb.io:443
        headers:
          "x-honeycomb-team": "{{{honeycomb-api-key}}}"
    service:
      pipelines:
        traces:
          receivers:
            - otlp
          exporters:
            - otlp/honeycomb
Thanks again to @jessitron for her clutch help. :)
I am playing around with RSA encryption and have just been using the standard PKCS1 padding, which works fine, but I would like to use the more advanced OAEP or PSS padding schemes. For some reason, when I switch the constant from PKCS1 to PKCS1_OAEP, it compiles but I get a run-time error, which implies to me that the functionality is there but I am doing something wrong.
Here's my code
use openssl::{rsa::{Rsa, Padding}, symm::Cipher};
use bincode;

fn main() {
    let rsa = Rsa::generate(512).unwrap();
    let password = "password";
    let source = "hello paul".to_string();
    let data = bincode::serialize(&source).unwrap();
    let private = rsa
        .private_key_to_pem_passphrase(Cipher::aes_128_cbc(), password.as_bytes())
        .unwrap();

    // encrypt
    let private_key = Rsa::private_key_from_pem_passphrase(&private, password.as_bytes()).unwrap();
    let mut enc_data = vec![0; private_key.size() as usize];
    match private_key.private_encrypt(&data, &mut enc_data, Padding::PKCS1_OAEP) {
        Ok(_) => {}
        Err(e) => {
            println!("{e}");
            return;
        }
    }
}
and my Cargo.toml dependencies
[dependencies]
openssl-sys = "0.9.79"
openssl = "0.10.44"
chrono = "0.4"
bincode = "1.0"
serde = { version = "1.0", features = ["derive"] }
and here is the error I am getting.
error:04066076:rsa routines:rsa_ossl_private_encrypt:unknown padding type:crypto/rsa/rsa_ossl.c:273:
I am getting this constant from this resource (https://docs.rs/openssl/latest/openssl/rsa/struct.Padding.html#associatedconstant.PKCS1_OAEP).
I have also tried it with PKCS1_PSS and get the same error.
Does anyone know what's up? Maybe they never actually finished OAEP or PSS, or is there something wrong on my end? Maybe I need to use a different cipher than aes_128_cbc? Thanks for reading; any help is appreciated.
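There is no answer recorded here, but as a hedged sketch of how OAEP is normally used with the openssl crate (assuming the goal is encryption rather than signing): OAEP is a padding mode for public_encrypt/private_decrypt, while private_encrypt is the low-level signing primitive and does not accept OAEP or PSS, which is consistent with the "unknown padding type" error above. Something like this round-trips with OAEP; it is illustrative, not a drop-in fix:

use openssl::rsa::{Padding, Rsa};

fn oaep_roundtrip() -> Result<(), openssl::error::ErrorStack> {
    let rsa = Rsa::generate(2048)?;
    let data = b"hello paul";

    // OAEP is applied when encrypting with the public key...
    let mut enc = vec![0; rsa.size() as usize];
    let enc_len = rsa.public_encrypt(data, &mut enc, Padding::PKCS1_OAEP)?;

    // ...and when decrypting with the private key.
    let mut dec = vec![0; rsa.size() as usize];
    let dec_len = rsa.private_decrypt(&enc[..enc_len], &mut dec, Padding::PKCS1_OAEP)?;
    assert_eq!(&dec[..dec_len], data);

    // For signing with PSS, use openssl::sign::Signer and set_rsa_padding(Padding::PKCS1_PSS)
    // rather than private_encrypt.
    Ok(())
}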
I want to execute this Move script, e.g. at sources/top_up.move:
script {
    use std::signer;
    use aptos_framework::aptos_account;
    use aptos_framework::aptos_coin;
    use aptos_framework::coin;

    fun main(src: &signer, dest: address, desired_balance: u64) {
        let src_addr = signer::address_of(src);
        let balance = coin::balance<aptos_coin::AptosCoin>(src_addr);
        if (balance < desired_balance) {
            aptos_account::transfer(src, dest, desired_balance - balance);
        };
    }
}
This is calling functions on the aptos_coin.move module, which is deployed on chain. What it does isn't so important for this question, but in short, it checks that the balance of the destination account is less than desired_balance, and if so, tops it up to desired_balance.
I can execute this Move script via the CLI easily like this:
aptos move compile
aptos move run-script --compiled-script-path build/MyModule/bytecode_scripts/main.mv
Or even just this:
aptos move run-script --script-path sources/top_up.move
What I want to know is whether I can do this using the Rust SDK?
First, you need to compile the script, as you did above. Imagine you have a project layout like this:
src/
  main.rs
move/
  Move.toml
  sources/
    top_up.move
You would want to go into move/ and run aptos move compile, like you said above. From there, you can depend on the compiled script in your code (see below).
With that complete, here is a minimal code example demonstrating how to execute a Move script using the Rust SDK.
Cargo.toml:
[package]
name = "my-example"
version = "0.1.0"
edition = "2021"

[dependencies]
anyhow = "1"
aptos-sdk = { git = "https://github.com/aptos-labs/aptos-core", branch = "mainnet" }
# Assumption: an async runtime is needed for the async main below.
tokio = { version = "1", features = ["full"] }
src/main.rs:
use aptos_sdk::crypto::ed25519::Ed25519PublicKey;
use aptos_sdk::types::transaction::authenticator::AuthenticationKey;
use aptos_sdk::{
    rest_client::Client,
    transaction_builder::TransactionFactory,
    types::{
        account_address::AccountAddress,
        chain_id::ChainId,
        transaction::{Script, SignedTransaction, TransactionArgument},
        LocalAccount,
    },
};

static SCRIPT: &[u8] =
    include_bytes!("../../move/build/MyModule/bytecode_scripts/main.mv");

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Prior to the following code we assume you've already acquired the necessary
    // information such as chain_id, the private key of the account submitting
    // the transaction, arguments for the Move script, etc.

    // Build a transaction factory.
    let txn_factory = TransactionFactory::new(chain_id);

    // Build a local representation of an account.
    let account = LocalAccount::new(
        AuthenticationKey::ed25519(&Ed25519PublicKey::from(&private_key)).derived_address(),
        private_key,
        0,
    );

    // Build an API client.
    let client = Client::new("https://fullnode.mainnet.aptoslabs.com");

    // Create a builder where the payload is the script.
    let txn_builder = txn_factory.script(Script::new(
        SCRIPT.to_vec(),
        // type args
        vec![],
        // args
        vec![
            TransactionArgument::Address(dest_address),
            TransactionArgument::U64(desired_balance),
        ],
    ));

    // Build the transaction request and sign it.
    let signed_txn = account.sign_with_transaction_builder(txn_builder);

    // Submit the transaction.
    client.submit_and_wait_bcs(&signed_txn).await?;

    Ok(())
}
Using the proto file given below,
https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/metrics/v1/metrics.proto
I have created a gRPC server in Rust and implemented the export method like this:
impl MetricsService for MyMetrics {
    async fn export(
        &self,
        request: Request<ExportMetricsServiceRequest>,
    ) -> Result<Response<ExportMetricsServiceResponse>, Status> {
        println!("Got a request from {:?}", request.remote_addr());
        println!("request data ==> {:?}", request);

        let reply = metrics::ExportMetricsServiceResponse {};
        Ok(Response::new(reply))
    }
}
To test this code, I created a gRPC client in Node.js with the same proto file and called the export method, which worked as expected.
Then I used the OTLP metrics exporter in Node.js (instead of making an explicit call to the export method); in this case, I am not receiving the request on the Rust gRPC server and am getting this error:
{"stack":"Error: 12 UNIMPLEMENTED: \n at Object.callErrorFromStatus (/home/acq053/work/src/github.com/middleware-labs/agent-node-metrics/node_modules/#grpc/grpc-js/build/src/call.js:31:26)\n at Object.onReceiveStatus (/home/acq053/work/src/github.com/middleware-labs/agent-node-metrics/node_modules/#grpc/grpc-js/build/src/client.js:189:52)\n at Object.onReceiveStatus (/home/acq053/work/src/github.com/middleware-labs/agent-node-metrics/node_modules/#grpc/grpc-js/build/src/client-interceptors.js:365:141)\n at Object.onReceiveStatus (/home/acq053/work/src/github.com/middleware-labs/agent-node-metrics/node_modules/#grpc/grpc-js/build/src/client-interceptors.js:328:181)\n at /home/acq053/work/src/github.com/middleware-labs/agent-node-metrics/node_modules/#grpc/grpc-js/build/src/call-stream.js:187:78\n at processTicksAndRejections (internal/process/task_queues.js:75:11)","message":"12 UNIMPLEMENTED: ","code":"12","metadata":"[object Object]","name":"Error"}
My Rust gRPC server is running at [::1]:50057, so I set the OTEL_EXPORTER_OTLP_ENDPOINT=[::1]:50057 environment variable while running my Node.js exporter.
What could have gone wrong?
My code is here: https://github.com/Bhogayata-Keval/rust-grpc-demo.git
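One detail worth keeping in mind while debugging this, though it is not from the original question or answer: the Node.js OTLP exporter calls the fully qualified method /opentelemetry.proto.collector.metrics.v1.MetricsService/Export, while a server compiled from a proto whose package is metrics (as the grpcurl test below suggests) only serves /metrics.MetricsService/Export, and tonic answers unknown paths with UNIMPLEMENTED. For reference, a rough sketch of wiring the handler above into a tonic server; the generated module path and the Default impl on MyMetrics are assumptions:

use tonic::transport::Server;
// Assumed module path from tonic_build output for a proto declaring `package metrics;`.
use metrics::metrics_service_server::MetricsServiceServer;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let addr = "[::1]:50057".parse()?;
    // The path served here is /metrics.MetricsService/Export, so an OTLP client
    // dialing /opentelemetry.proto.collector.metrics.v1.MetricsService/Export
    // will get UNIMPLEMENTED back.
    Server::builder()
        // Assumes MyMetrics implements Default; otherwise construct it directly.
        .add_service(MetricsServiceServer::new(MyMetrics::default()))
        .serve(addr)
        .await?;
    Ok(())
}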
I compiled your code with two changes:
1. Removed tonic_build::compile_protos("proto/helloworld.proto") in build.rs.
2. Changed let addr = "[::1]:50057".parse().unwrap(); to let addr = "127.0.0.1:50057".parse().unwrap(); in metric.rs.
I ran target/debug/metrics and used grpcurl as a client, with the command:
./grpcurl -plaintext -import-path ./proto -proto metrics.proto 127.0.0.1:50057 metrics.MetricsService/Export
The result was:
{
}
Is this the expected result?
Regards
I'm struggling with the actix-web 2.0 framework in Rust. I want my Rust server to serve my index.html file, but most of the help available is for older versions and a lot has changed in the newer version. I tried the following code, but it's not working for actix-web 2.0. Please suggest a working solution for actix-web 2.0.
use actix_files::NamedFile;
use actix_web::{HttpRequest, Result};

async fn index(req: HttpRequest) -> Result<NamedFile> {
    Ok(NamedFile::open(path_to_file)?)
}
Using the code given in the answer, I could serve a single HTML file, but it is unable to load the linked JavaScript file. I have tried the following approach, suggested in https://actix.rs/docs/static-files/, to serve the directory.
#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    dotenv::dotenv().ok();
    std::env::set_var("RUST_LOG", "actix_web=debug");
    let database_url = std::env::var("DATABASE_URL").expect("set DATABASE_URL");

    // create db connection pool
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    let pool: Pool = r2d2::Pool::builder()
        .build(manager)
        .expect("Failed to create pool.");

    // Serving the Registration and sign-in page
    async fn index(_req: HttpRequest) -> Result<NamedFile> {
        let path: PathBuf = "./static/index.html".parse().unwrap();
        Ok(NamedFile::open(path)?)
    }

    // Start http server
    HttpServer::new(move || {
        App::new()
            .data(pool.clone())
            .service(fs::Files::new("/static", ".").show_files_listing())
            .route("/", web::get().to(index))
            .route("/users", web::get().to(handler::get_users))
            .route("/users/{id}", web::get().to(handler::get_user_by_id))
            .route("/users", web::post().to(handler::add_user))
            .route("/users/{id}", web::delete().to(handler::delete_user))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}
Above is my main method. In the browser console I'm still getting an error that it is unable to load the Registration.js resource. The following is my folder structure:
- migrations
- src
  - main.rs
  - handler.rs
  - errors.rs
  - models.rs
  - schema.rs
- static
  - index.html
  - Registration.js
- target
- Cargo.toml
- .env
- Cargo.lock
- diesel.toml
I have already built the backend with DB integration, and it is working fine as checked with curl commands. Now I'm trying to build the front end, and as a first step I am trying to serve static files.
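Not from the original answers, but given the folder structure above, one likely cause of the missing Registration.js is that fs::Files::new("/static", ".") mounts the URL prefix /static onto the project root, so /static/Registration.js resolves to ./Registration.js rather than ./static/Registration.js. A minimal sketch that points the mount at the ./static directory instead:

use actix_files as fs;
use actix_web::{App, HttpServer};

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // Serve ./static/* under the /static URL prefix, e.g.
            // GET /static/Registration.js -> ./static/Registration.js
            .service(fs::Files::new("/static", "./static").show_files_listing())
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}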
I am not sure what problem you're facing, since the description is not detailed; however, I ran the default example and it is working.
use actix_files::NamedFile;
use actix_web::{HttpRequest, Result};
use std::path::PathBuf;

/// https://actix.rs/docs/static-files/
async fn index(_req: HttpRequest) -> Result<NamedFile> {
    let path: PathBuf = "./files/index.html".parse().unwrap();
    Ok(NamedFile::open(path)?)
}

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    use actix_web::{web, App, HttpServer};

    HttpServer::new(|| App::new().route("/", web::get().to(index)))
        .bind("127.0.0.1:8088")?
        .run()
        .await
}
project structure
- files/index.html
- src/index.rs
- Cargo.toml
dependencies
[dependencies]
actix-web = "2.0.0"
actix-files = "0.2.2"
actix-rt = "1.1.1"
If you want to really embed resources into the executable, you can use https://crates.io/crates/actix-web-static-files.
It uses build.rs to prepare the resources, and afterwards you can ship a single executable without external files.
As an extra, it supports npm-based builds out of the box.
I am the author of this crate. There are versions for both the 2.x and 3.x versions of actix-web.
Please see the code below; it works for whole subdirectories:
main.rs
use actix_files as fs;
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().service(
            fs::Files::new("/", "./public")
                .show_files_listing()
                .index_file("index.html")
                .use_last_modified(true),
        )
    })
    .bind("0.0.0.0:8000")?
    .run()
    .await
}
Cargo.toml
[package]
name = "actixminimal"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
actix-web = "4"
actix-files = "0.6.2"
It serves files very fast, the fastest of anything I've tried.
In this case you can create a "public" folder, or use your own "static" folder (and change the folder name in the code accordingly), or export the static files from your web framework. I use GatsbyJS, which deploys its build output to a "public" folder.
I share it on my GitHub:
https://github.com/openmymai/actixminimal.git