I'm trying to embed Diesel migrations into my Rocket application, but whenever I try to select from the tables, they don't exist. The "Migrations successfully completed" message is printed and Rocket starts normally, but the migrations don't seem to be applied. I'm really not sure what's happening here.
I tested the PgPool connection by running the migrations manually, and it works fine.
my migrations directory:
00000000000000_diesel_initial_setup
    down.sql
    up.sql
2022-11-13-225713_users
    down.sql
    up.sql
my dependencies:
diesel = { version = "2.0.2", features = ["postgres", "nightly-error-messages", "r2d2"] }
diesel_migrations = "2.0.0"
rocket = { version = "0.5.0-rc.2", features = ["json"] }
pub const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");

fn run_migrations(connection: &mut impl MigrationHarness<diesel::pg::Pg>) {
    match connection.run_pending_migrations(MIGRATIONS) {
        Ok(_) => println!("Migrations successfully completed"),
        Err(e) => panic!("error running pending migrations: {}", e),
    };
}
async fn run_rocket_migrations(rocket: Rocket<Build>) -> Rocket<Build> {
    // Note: this is a single pooled connection checked out of the pool.
    let mut conn = rocket
        .state::<PgPool>()
        .expect("managed database")
        .get()
        .expect("database connection");
    run_migrations(&mut conn);
    rocket
}
#[launch]
fn rocket() -> _ {
    rocket::build()
        .manage(establish_connection())
        .attach(AdHoc::on_ignite("run migrations", run_rocket_migrations))
}
I'm using teloxide to build a bot that coordinates information between players of a game. As a simplified example, I want player A to mark task_1 as complete, and then I want player B to be able to ask the bot whether task_1 has been completed and for the bot to respond appropriately.
So far I've tried the in-memory storage and the Redis storage, and both seem to store independent states. For example, player A can mark a task as complete, but when player B asks about the state of that task, it remains incomplete.
I haven't got SQLite set up on my machine, so I haven't tried it yet, but I wouldn't imagine it's set up differently from Redis(?).
I could theoretically write the bot's state to a file on disk and update the internal state every time, but that seems very roundabout for something I'm hoping teloxide already has built in.
Is there any way I can have multiple users mutating the same internal state of the bot?
Here's the code I'm using. It's lightly adjusted from the db_remember.rs example so that I can get a number, reset the number, and set the number to something new using /set XXX.
src/main.rs:
use teloxide::{
    dispatching::dialogue::{serializer::Bincode, ErasedStorage, RedisStorage, Storage},
    prelude::*,
    utils::command::BotCommands,
};
use serde::{self, Deserialize, Serialize};

type MyDialogue = Dialogue<State, ErasedStorage<State>>;
type MyStorage = std::sync::Arc<ErasedStorage<State>>;
type HandlerResult = Result<(), Box<dyn std::error::Error + Send + Sync>>;
#[derive(Clone, Default, Serialize, Deserialize)]
pub enum State {
    #[default]
    Start,
    GotNumber(i32),
}

#[derive(Clone, BotCommands)]
#[command(rename_rule = "lowercase", description = "These commands are supported:")]
pub enum Command {
    #[command(description = "set your number.")]
    Set { num: i32 },
    #[command(description = "get your number.")]
    Get,
    #[command(description = "reset your number.")]
    Reset,
}
#[tokio::main]
async fn main() {
    pretty_env_logger::init();
    log::info!("Starting DB remember bot...");

    let bot = Bot::from_env();

    let storage: MyStorage =
        RedisStorage::open("redis://127.0.0.1:6379", Bincode).await.unwrap().erase();

    let handler = Update::filter_message()
        .enter_dialogue::<Message, ErasedStorage<State>, State>()
        .branch(dptree::case![State::Start].endpoint(start))
        .branch(
            dptree::case![State::GotNumber(n)]
                .branch(dptree::entry().filter_command::<Command>().endpoint(got_number))
                .branch(dptree::endpoint(invalid_command)),
        );

    Dispatcher::builder(bot, handler)
        .dependencies(dptree::deps![storage])
        .enable_ctrlc_handler()
        .build()
        .dispatch()
        .await;
}
async fn start(bot: Bot, dialogue: MyDialogue, msg: Message) -> HandlerResult {
    match msg.text().map(|text| text.parse::<i32>()) {
        Some(Ok(n)) => {
            log::info!("[{:?}] Number set to {n}", msg.chat.username());
            dialogue.update(State::GotNumber(n)).await?;
            bot.send_message(
                msg.chat.id,
                format!("Remembered number {n}. Now use /get or /reset."),
            )
            .await?;
        }
        _ => {
            bot.send_message(msg.chat.id, "Please, send me a number.").await?;
        }
    }

    Ok(())
}

async fn got_number(
    bot: Bot,
    dialogue: MyDialogue,
    num: i32, // Available from `State::GotNumber`.
    msg: Message,
    cmd: Command,
) -> HandlerResult {
    let old_num = num;

    match cmd {
        Command::Set { num } => {
            log::info!("[{:?}] Number changed from {} to {}", msg.chat.username(), old_num, num);
            dialogue.update(State::GotNumber(num)).await?;
            bot.send_message(msg.chat.id, format!("Set your number from {} to {}", old_num, num))
                .await?;
        }
        Command::Get => {
            bot.send_message(msg.chat.id, format!("Here is your number: {num}.")).await?;
        }
        Command::Reset => {
            dialogue.reset().await?;
            bot.send_message(msg.chat.id, "Number reset.").await?;
        }
    }

    Ok(())
}

async fn invalid_command(bot: Bot, msg: Message) -> HandlerResult {
    bot.send_message(msg.chat.id, "Please, send /get or /reset.").await?;
    Ok(())
}
Cargo.toml:
[package]
name = "tmp"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
teloxide = { version = "0.12", features = ["macros", "redis-storage", "bincode-serializer"] }
log = "0.4"
pretty_env_logger = "0.4"
tokio = { version = "1.8", features = ["rt-multi-thread", "macros"] }
serde_yaml = "0.9.17"
serde = "1.0.152"
This was answered by WaffleLapkin on the Teloxide github discussion page:
The dialogue storages in teloxide store state per chat, if you need something else, you'll need to work with databases directly.
So I ended up using a Redis database to store the game state, and regular in-memory for storing teloxide's state. Something like:
// Cargo.toml: redis = { version = "*" }
extern crate redis;
use redis::{Commands, Connection};

/// Establish a connection to the Redis database,
/// used for sharing Game state across
/// different instances of the bot.
fn connect_to_redis() -> Result<Connection, Box<dyn std::error::Error + Sync + Send>> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let conn = client.get_connection()?;
    // Cargo.toml: log = "0.4"
    log::info!("Redis connection established");
    Ok(conn)
}
But you also can't directly store a Rust struct in Redis, so I used serde_yaml to serialise the game state:
fn set_game_state(
    conn: &mut Connection,
    game: Game,
) -> Result<(), Box<dyn Error + Send + Sync>> {
    let game_key = "game_state";
    conn.set::<&str, String, ()>(game_key, serde_yaml::to_string(&game)?)?;
    Ok(())
}
And then stored it all as one long string in a single key in the DB.
Retrieving the game state is a similar operation, but I also added some logic to create the game from an existing "template" YAML file, game.yaml, if the key doesn't exist:
fn get_game_state(conn: &mut Connection) -> Result<Game, Box<dyn Error + Send + Sync>> {
    let game_state_key = "game_state";
    let game_state: Game = if conn.exists(game_state_key)? {
        log::debug!("Getting game state");
        let serialised = conn.get::<&str, String>(game_state_key)?;
        serde_yaml::from_str::<Game>(&serialised)?
    } else {
        log::debug!("Creating game state");
        let game = Game::new("challenges.yaml")?;
        let serialised = serde_yaml::to_string(&game)?;
        conn.set::<&str, String, ()>(game_state_key, serialised)?;
        game
    };
    Ok(game_state)
}
A bit overkill, but it works.
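The per-chat scoping WaffleLapkin describes is the crux: dialogue storage gives every chat its own State, so cross-player state has to live somewhere shared. A stdlib-only sketch of that idea, one board mutated by all chats (TaskBoard and the task names are hypothetical; in teloxide you would pass the Arc into dptree::deps! so every handler receives a clone):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// One shared task board for the whole bot, not one per chat.
type TaskBoard = Arc<Mutex<HashMap<String, bool>>>;

fn mark_complete(board: &TaskBoard, task: &str) {
    board.lock().unwrap().insert(task.to_string(), true);
}

fn is_complete(board: &TaskBoard, task: &str) -> bool {
    *board.lock().unwrap().get(task).unwrap_or(&false)
}

fn main() {
    let board: TaskBoard = Arc::new(Mutex::new(HashMap::new()));
    // Player A marks task_1 complete in one chat...
    mark_complete(&board, "task_1");
    // ...and player B's handler sees it from another chat, because
    // the board lives outside the per-chat dialogue storage.
    assert!(is_complete(&board, "task_1"));
    assert!(!is_complete(&board, "task_2"));
}
```

Cloning the Arc is cheap, so each handler invocation can take its own handle; the Mutex serialises concurrent updates from different chats.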
I want to extract my AppState struct from HttpRequest in my handler function.
It works when I call the handler through my main app instance, but it doesn't work inside integration tests.
Handler function:
pub async fn handle(req: HttpRequest, user: Json<NewUser>) -> Result<HttpResponse, ShopError> {
    let state = req.app_data::<Data<AppState>>().expect("app_data is empty!");
    Ok(HttpResponse::Ok().finish())
}
main.rs:
#[actix_web::main]
pub async fn main() -> std::io::Result<()> {
    dotenv().ok();

    let chat_server = Lobby::default().start();
    let state = AppState {
        static_data: String::from("my_data"),
    };

    HttpServer::new(move || {
        App::new()
            .app_data(Data::new(state.clone()))
            .service(web::scope("/").configure(routes::router))
            .app_data(Data::new(chat_server.clone()))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
Test:
#[actix_web::test]
async fn sanity_test() {
    let app = test::init_service(App::new().route("/", web::post().to(handle))).await;
    let user = NewUser {
        username: String::from("legit_user"),
        password: String::from("123"),
    };
    let state = AppState {
        static_data: String::from("my_data"),
    };
    let req = test::TestRequest::post()
        .app_data(Data::new(state))
        .set_json(&user)
        .uri("/")
        .to_request();
    let resp = test::call_service(&app, req).await;
    assert!(resp.status().is_success());
}
Test output:
running 1 test
thread 'routes::login::tests::sanity_test' panicked at 'app_data is empty!', src/routes/login.rs:35:50
For some reason it is always None. I've tried using the Data<AppState> extractor, but then the whole handler is not even called; again, this happens only when testing, otherwise everything works.
Cargo.toml:
[dependencies]
diesel = { version = "1.4.4", features = ["postgres", "r2d2", "chrono"] }
diesel_migrations = "1.4.0"
chrono = { version = "0.4.19", features = ["serde"] }
dotenv = "0.15.0"
actix = "0.13.0"
actix-web = "4.1.0"
actix-web-actors = "4.1.0"
bcrypt = "0.13.0"
uuid = { version = "1.1.2", features = ["serde", "v4"] }
serde_json = "1.0.82"
serde = { version = "1.0.139", features = ["derive"] }
validator = { version = "0.16.0", features = ["derive"] }
derive_more = "0.99.17"
r2d2 = "0.8.10"
lazy_static = "1.4.0"
jsonwebtoken = "8.1.1"
I'm aware of "app_data is always None in request handler thread", and it does not solve my problem, since for me everything works except when testing.
From what I see in your integration test, app_data is not configured for the app instance passed to test::init_service, which is why the handler panics. When the app is configured with app_data, as you have done in main, app_data becomes available to the handler.
With the code below, the handler can access AppState in the integration test. The main difference from the original post is that in the integration test app_data is configured on the app, not on the request.
use actix_web::{
    post,
    web::{Data, Json},
    App, HttpRequest, HttpResponse, HttpServer,
};
use serde::{Deserialize, Serialize};

#[derive(Deserialize, Serialize)]
pub struct NewUser {
    username: String,
    password: String,
}

#[derive(Clone)]
struct AppState {
    static_data: String,
}

#[post("/")]
pub async fn handle(req: HttpRequest, _user: Json<NewUser>) -> HttpResponse {
    let state = req
        .app_data::<Data<AppState>>()
        .expect("app_data is empty!");
    println!("Static data: {}", state.static_data);
    HttpResponse::Ok().finish()
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let state = AppState {
        static_data: "Sparta".to_string(),
    };

    HttpServer::new(move || {
        App::new()
            .app_data(Data::new(state.clone()))
            .service(handle)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
#[cfg(test)]
mod tests {
    use super::*;
    use actix_web::test;

    #[actix_web::test]
    async fn itest() {
        // Set up the app with test::init_service.
        let state = AppState {
            static_data: "Sparta".to_string(),
        };
        let app = test::init_service(
            App::new()
                .app_data(Data::new(state.clone()))
                .service(handle),
        )
        .await;

        // Prepare the request.
        let sample_user = NewUser {
            username: "legit_user".to_string(),
            password: "nosecret123".to_string(),
        };
        let req = test::TestRequest::post()
            .set_json(&sample_user)
            .uri("/")
            .to_request();

        // Send the request and await the response.
        let resp = test::call_service(&app, req).await;
        assert!(resp.status().is_success())
    }
}
I am trying to use monoio with s2n_quic to implement QUIC on this fast runtime. When run, it prints
StartError:
there is no reactor running, must be called from the context of a Tokio 1.x runtime
Here is my code:
use s2n_quic::{provider::connection_id, Server};
use std::error::Error;

fn main() -> Result<(), Box<dyn Error>> {
    let mut rt = monoio::RuntimeBuilder::new()
        .with_entries(1024)
        .enable_timer()
        .build()
        .unwrap();
    rt.block_on(async {
        println!("it works2!");
        let server = Server::builder();
        match server
            .with_connection_id(connection_id::Default::default()).unwrap()
            .with_io("127.0.0.1:8080").unwrap()
            .start()
        {
            Ok(mut serv) => {
                while let Some(mut connection) = serv.accept().await {
                    monoio::spawn(async move {
                        while let Ok(Some(mut stream)) =
                            connection.accept_bidirectional_stream().await
                        {
                            monoio::spawn(async move {
                                while let Ok(Some(data)) = stream.receive().await {
                                    stream.send(data).await.expect("stream should be open");
                                }
                            });
                        }
                    });
                }
            }
            Err(e) => {
                println!("{}", e);
            }
        }
    });
    Ok(())
}
To run it you will need a Linux kernel of version 5.6 or newer (monoio is built on io_uring).
I'm trying to use Actix, SQLx, and Juniper together in my Rust project. I've followed and combined every tutorial I've found, and it successfully compiles and runs. But when I post a query, I get this error in the terminal:
thread 'actix-web' panicked at 'Tried to resolve async field users on type Some("Query") with a sync resolver', src/graphql.rs:15:1
and my GraphiQL shows "Thread pool is gone"
This is src/graphql.rs:
#[derive(Clone, Debug)]
pub struct Context {
    pub pool: PgPool,
}
impl juniper::Context for Context {}

pub struct Query;

#[graphql_object(Context = Context)] // this is line 15
impl Query {
    fn apiVersion() -> &str {
        "1.0"
    }

    #[graphql(description = "Hello")]
    pub async fn users(ctx: &Context) -> FieldResult<Vec<User>> {
        println!("{:#?}", ctx);
        sqlx::query_as::<_, User>("SELECT * FROM users")
            .fetch_all(&ctx.pool)
            .await
            .map_err(|e| e.into())
    }
}

pub type Schema = RootNode<'static, Query, EmptyMutation<Context>, EmptySubscription<Context>>;

pub fn create_schema() -> Schema {
    Schema::new(Query {}, EmptyMutation::new(), EmptySubscription::new())
}
After tracing the error, I found it panics when I use execute_sync in my src/handler.rs:
pub fn graphql_handlers(config: &mut ServiceConfig) {
    config
        .data(create_schema())
        .route("/graphql", web::get().to(graphql_playground))
        .route("/graphql", web::post().to(graphql));
}

...
...

async fn graphql(
    pool: web::Data<PgPool>,
    schema: web::Data<Schema>,
    data: web::Json<GraphQLRequest>,
) -> Result<HttpResponse, Error> {
    let ctx = Context {
        pool: pool.get_ref().to_owned(),
    };

    let res = block(move || {
        let res = data.execute_sync(&schema, &ctx);
        Ok::<_, serde_json::error::Error>(serde_json::to_string(&res)?)
    })
    .await
    .map_err(Error::from)?;

    Ok(HttpResponse::Ok()
        .content_type("application/json")
        .body(res))
}
I've tried to find the solution or boilerplate code but still couldn't find it.
Here's my main.rs:
#[actix_web::main]
async fn main() -> Result<()> {
    let pool = create_pool().await.expect("Failed connecting to postgres");

    HttpServer::new(move || {
        App::new()
            .data(pool.clone())
            .wrap(Logger::default())
            .configure(graphql_handlers)
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}
Here's my dependencies:
actix-web = "3"
juniper = "0.15"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0.64"
uuid = { version = "0.8", features = [ "serde", "v4"] }
sqlx = { version = "0.4", features = [ "runtime-actix-rustls", "postgres", "uuid" ] }
I plan to build an application that uses SQLite databases as data files.
Because different files can be opened frequently, I want to cache the connections.
I'm very new to Rust; this is my first project...
My problem is: at some point I run out of file handles, and I cannot create new database files.
What I have tried so far:
test1() only works if I implement Drop for MyPool. Drop closes the connection pool; by doing this, I make sure the file handles are freed again.
test2() is the async version, which I need for my project (it will be a Rocket app). Here I have not been successful at all.
If you run the code, you will have to delete all db.* files afterwards.
// Cargo.toml
// tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
// futures = "0.3"
// sqlx = { version = "0.5", features = ["runtime-tokio-native-tls", "sqlite", "migrate"] }
use futures::executor::block_on;
use sqlx::{migrate::MigrateDatabase, sqlite::SqlitePoolOptions, Pool, Sqlite};
use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct MyPool(Pool<Sqlite>);

impl Drop for MyPool {
    fn drop(&mut self) {
        println!("**** drop");
        block_on(self.0.close());
    }
}

#[tokio::main]
async fn main() {
    test1().await;
    //test2().await;
}

async fn test1() {
    let mut pool: Vec<MyPool> = Vec::new();
    for i in 1..1000 {
        let db_name = format!("./db.{}.db", i);
        Sqlite::create_database(&db_name)
            .await
            .expect(format!("create {} failed", i).as_str());
        let conn = SqlitePoolOptions::new()
            .max_connections(5)
            .connect(&db_name)
            .await
            .expect(format!("connect {} failed", i).as_str());
        if pool.len() == 10 {
            println!("Cleanup");
            pool.clear();
        }
        println!("{}", i);
        pool.push(MyPool(conn));
    }
}
async fn test2() {
    let pool: Arc<Mutex<Vec<MyPool>>> = Arc::new(Mutex::new(Vec::new()));
    let tasks: Vec<_> = (0..1000)
        .map(|i| {
            let my_pool = pool.clone();
            tokio::spawn(async move {
                let db_name = format!("./db.{}.db", i);
                Sqlite::create_database(&db_name)
                    .await
                    .expect(format!("create {} failed", i).as_str());
                let conn = SqlitePoolOptions::new()
                    .max_connections(5)
                    .connect(&db_name)
                    .await
                    .expect(format!("connect {} failed", i).as_str());
                {
                    let mut locked_pool = my_pool.lock().expect("locked");
                    if locked_pool.len() == 10 {
                        println!("Cleanup");
                        locked_pool.clear();
                    }
                    println!("{}", i);
                    locked_pool.push(MyPool(conn));
                }
            })
        })
        .collect();

    // Wait for all tasks to complete.
    futures::future::join_all(tasks).await;
}