feat: add lmdb for block cache to save memory #229

Open · wants to merge 3 commits into base: development
91 changes: 80 additions & 11 deletions Cargo.lock


2 changes: 2 additions & 0 deletions Cargo.toml
@@ -62,6 +62,8 @@ thiserror = "1.0"
tokio = { version = "1.41.0", features = ["full"] }
tonic = "0.12.3"
lru = "0.12.5"
tempfile = "3.14.0"
rkv = { version = "0.19.0", features = ["lmdb"] }

[package.metadata.cargo-machete]
ignored = ["log4rs"]
6 changes: 6 additions & 0 deletions src/main.rs
@@ -5,6 +5,7 @@ use std::{
fs::File,
io::Write,
panic,
process,
time::{SystemTime, UNIX_EPOCH},
};

@@ -56,6 +57,11 @@ async fn main() -> anyhow::Result<()> {
let mut file = File::create("panic.log").unwrap();
file.write_all(format!("Panic at {}: {}", location, message).as_bytes())
.unwrap();
if cfg!(debug_assertions) {
// In debug mode, we want to see the panic message
eprintln!("Panic occurred at {}: {}", location, message);
process::exit(500);
}
}));

Cli::parse().handle_command(Shutdown::new().to_signal()).await?;
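The debug-mode branch added above can be exercised in isolation. Here is a minimal, self-contained sketch of the same hook pattern, omitting `process::exit` so the effect is observable within one run (the log file name and `install_hook` helper are illustrative):

```rust
use std::fs::{self, File};
use std::io::Write;
use std::panic;

// Install a hook that writes panic details to a file and, in debug builds,
// echoes them to stderr (mirroring the PR's pattern, minus the exit call).
fn install_hook(log_path: &'static str) {
    panic::set_hook(Box::new(move |info| {
        let location = info
            .location()
            .map(|l| l.to_string())
            .unwrap_or_else(|| "unknown".to_string());
        // Panic payloads are usually &str (panic!("literal")) or String
        // (panic!("{}", x)); handle both.
        let message = info
            .payload()
            .downcast_ref::<&str>()
            .map(|s| s.to_string())
            .or_else(|| info.payload().downcast_ref::<String>().cloned())
            .unwrap_or_else(|| "unknown".to_string());
        let mut file = File::create(log_path).unwrap();
        file.write_all(format!("Panic at {}: {}", location, message).as_bytes())
            .unwrap();
        if cfg!(debug_assertions) {
            eprintln!("Panic occurred at {}: {}", location, message);
        }
    }));
}

fn main() {
    install_hook("panic_demo.log");
    // Trigger and swallow a panic so the hook's output can be inspected.
    let _ = panic::catch_unwind(|| panic!("boom"));
    let _ = panic::take_hook(); // restore the default hook
    let logged = fs::read_to_string("panic_demo.log").unwrap();
    assert!(logged.contains("boom"));
    fs::remove_file("panic_demo.log").ok();
}
```

One caveat worth flagging on the diff itself: on Unix, process exit statuses are truncated to 8 bits, so `process::exit(500)` is reported to the parent as status 244 (500 mod 256). A value below 256 would be less surprising to callers.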
2 changes: 2 additions & 0 deletions src/server/config.rs
@@ -23,6 +23,7 @@ pub struct Config {
pub max_relay_circuits_per_peer: Option<usize>,
pub block_time: u64,
pub share_window: u64,
pub block_cache_path: PathBuf,
}

impl Default for Config {
@@ -41,6 +42,7 @@ impl Default for Config {
max_relay_circuits_per_peer: None,
block_time: 20,
share_window: 2160,
block_cache_path: PathBuf::from("block_cache"),
}
}
}
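The new field follows the struct's existing pattern of a hard-coded default. A standalone sketch of the same shape, trimmed to the fields touched here (field names mirror the diff; the rest of `Config` is elided):

```rust
use std::path::PathBuf;

// Trimmed-down mirror of the PR's Config: only the fields relevant here.
#[derive(Debug, Clone)]
struct Config {
    block_time: u64,
    share_window: u64,
    block_cache_path: PathBuf,
}

impl Default for Config {
    fn default() -> Self {
        Self {
            block_time: 20,
            share_window: 2160,
            // Relative path: resolved against the process working directory,
            // so two instances launched from the same directory would share it.
            block_cache_path: PathBuf::from("block_cache"),
        }
    }
}

fn main() {
    let cfg = Config::default();
    assert_eq!(cfg.block_cache_path, PathBuf::from("block_cache"));
    println!("default cache path: {}", cfg.block_cache_path.display());
}
```

Since the default is a relative path, deployments that run multiple instances from one working directory would likely want to override `block_cache_path` per instance.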
24 changes: 21 additions & 3 deletions src/server/p2p/network.rs
@@ -2124,14 +2124,24 @@ where S: ShareChain
if num_connections > 20 {
continue;
}
if num_connections == 0 {
match self.dial_seed_peers().await {
Ok(_) => {},
Err(e) => {
warn!(target: LOG_TARGET, "Failed to dial seed peers: {e:?}");
},
}
continue;
}

let mut num_dialed = 0;
let store_read_lock = self.network_peer_store.read().await;
// Prefer searching for known-good peers rather than dialing at random;
// working through 1000 peers would take a long time
for record in store_read_lock.whitelist_peers().values() {
// Seed peers are skipped here; they are dialed separately when we have zero connections
if !self.swarm.is_connected(&record.peer_id)
&& (num_connections == 0 || !store_read_lock.is_seed_peer(&record.peer_id)) {
&& !store_read_lock.is_seed_peer(&record.peer_id) {
let _unused = self.swarm.dial(record.peer_id);
num_dialed += 1;
// We can only do 30 connections
@@ -2430,6 +2440,13 @@ where S: ShareChain
self.query_tx.clone()
}

pub async fn dial_seed_peers(&mut self) -> Result<(), Error> {
info!(target: LOG_TARGET, squad = &self.config.squad; "Dialing seed peers...");
let seed_peers = self.parse_seed_peers().await?;
self.join_seed_peers(seed_peers).await?;
Ok(())
}

/// Starts p2p service.
/// Please note that this is a blocking call!
pub async fn start(&mut self) -> Result<(), Error> {
@@ -2457,8 +2474,9 @@
}
self.subscribe_to_topics().await;

let seed_peers = self.parse_seed_peers().await?;
self.join_seed_peers(seed_peers).await?;
self.dial_seed_peers().await?;
// let seed_peers = self.parse_seed_peers().await?;
// self.join_seed_peers(seed_peers).await?;

// start initial share chain sync
// let in_progress = self.sync_in_progress.clone();
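The connection-maintenance change above separates two cases: with zero connections the node falls back to the new `dial_seed_peers` path, otherwise it dials whitelisted non-seed peers. A simplified, self-contained sketch of that decision (peer IDs reduced to integers; the cap of 30 and the threshold of 20 come from the diff's code and comments, the function name is illustrative):

```rust
use std::collections::HashSet;

// Simplified model of the maintenance-tick dialing decision from the PR.
// Returns the peers to dial this tick.
fn peers_to_dial(
    num_connections: usize,
    whitelist: &[u64],
    seeds: &HashSet<u64>,
    connected: &HashSet<u64>,
) -> Vec<u64> {
    // Already well connected: nothing to do.
    if num_connections > 20 {
        return Vec::new();
    }
    // Isolated: fall back to the seed peers (the dial_seed_peers path).
    if num_connections == 0 {
        let mut s: Vec<u64> = seeds.iter().copied().collect();
        s.sort_unstable();
        return s;
    }
    // Otherwise dial whitelisted peers we aren't connected to, skipping
    // seeds, capped at 30 new dials per tick.
    whitelist
        .iter()
        .filter(|p| !connected.contains(p) && !seeds.contains(p))
        .take(30)
        .copied()
        .collect()
}

fn main() {
    let seeds: HashSet<u64> = [1, 2].into_iter().collect();
    let connected: HashSet<u64> = [3].into_iter().collect();
    let whitelist = [1, 2, 3, 4, 5];

    // Isolated node: only seeds are dialed.
    assert_eq!(peers_to_dial(0, &whitelist, &seeds, &connected), vec![1, 2]);
    // Connected but below threshold: non-seed, not-yet-connected peers.
    assert_eq!(peers_to_dial(5, &whitelist, &seeds, &connected), vec![4, 5]);
    // Well connected: no new dials.
    assert!(peers_to_dial(25, &whitelist, &seeds, &connected).is_empty());
}
```

Keeping seed peers out of the routine dial loop reserves their connection slots for organically discovered peers, while `dial_seed_peers` guarantees a recovery path when the node is fully disconnected.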