perf: make filters synchronous #1023

Merged 5 commits into main from ep/sync-filter on Oct 8, 2024
Conversation

XAMPPRocky
Collaborator

This is a test to check whether making the filters synchronous makes a difference to performance under the new raw io-uring implementation.
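For illustration, a minimal sketch of what the change under test means for the hot path (the names below are hypothetical stand-ins, not quilkin's actual Filter API): the filter hook becomes a plain function call instead of a future the runtime has to create and poll per packet.

// Hypothetical before/after, for illustration only.

// Before: each packet awaits the filter chain, even though the filter
// logic itself never needs to suspend.
async fn filter_read_async(packet: &mut Vec<u8>) -> Option<()> {
    let _ = packet.len(); // placeholder for real filter logic
    Some(())
}

// After: the filter chain runs to completion on the calling thread,
// avoiding per-packet future creation and polling.
fn filter_read_sync(packet: &mut Vec<u8>) -> Option<()> {
    let _ = packet.len(); // placeholder for real filter logic
    Some(())
}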

@Jake-Shadle
Collaborator

fn spawn_workers(
    rt: &tokio::runtime::Runtime,
    ctx: PacketProcessorCtx,
    pending_sends: PendingSends,
    packet_processed_event: EventFdWriter,
    mut shutdown_rx: crate::ShutdownRx,
    shutdown_event: EventFdWriter,
) -> tokio::sync::mpsc::Sender<RecvPacket> {
    let (tx, mut rx) = tokio::sync::mpsc::channel::<RecvPacket>(1);

    // Spawn a task that just monitors the shutdown receiver to notify the io-uring loop to exit
    rt.spawn(async move {
        // The result is uninteresting, either a shutdown has been signalled, or all senders have been dropped
        // which equates to the same thing
        let _ = shutdown_rx.changed().await;
        shutdown_event.write(1);
    });

    match ctx {
        PacketProcessorCtx::Router {
            config,
            sessions,
            error_sender,
            worker_id,
            upstream_receiver,
        } => {
            rt.spawn(async move {
                let mut last_received_at = None;
                let mut error_acc = super::error::ErrorAccumulator::new(error_sender);

                while let Some(packet) = rx.recv().await {
                    let received_at = UtcTimestamp::now();
                    if let Some(last_received_at) = last_received_at {
                        metrics::packet_jitter(metrics::READ, &metrics::EMPTY)
                            .set((received_at - last_received_at).nanos());
                    }
                    last_received_at = Some(received_at);

                    let ds_packet = proxy::packet_router::DownstreamPacket {
                        contents: packet.buffer,
                        source: packet.source,
                    };

                    crate::components::proxy::packet_router::DownstreamReceiveWorkerConfig::process_task(
                        ds_packet,
                        worker_id,
                        &config,
                        &sessions,
                        &mut error_acc,
                    )
                    .await;

                    packet_processed_event.write(1);
                }
            });

            rt.spawn(async move {
                while let Ok(packet) = upstream_receiver.recv().await {
                    let packet = SendPacket {
                        destination: packet.destination.into(),
                        buffer: packet.data,
                        asn_info: packet.asn_info,
                    };
                    pending_sends.push(packet);
                }
            });
        }
        PacketProcessorCtx::SessionPool {
            pool,
            port,
            mut downstream_receiver,
        } => {
            rt.spawn(async move {
                let mut last_received_at = None;

                while let Some(packet) = rx.recv().await {
                    pool.process_received_upstream_packet(
                        packet.buffer,
                        packet.source,
                        port,
                        &mut last_received_at,
                    )
                    .await;

                    packet_processed_event.write(1);
                }
            });

            rt.spawn(async move {
                while let Some(packet) = downstream_receiver.recv().await {
                    let packet = SendPacket {
                        destination: packet.destination.into(),
                        buffer: packet.data,
                        asn_info: packet.asn_info,
                    };
                    pending_sends.push(packet);
                }
            });
        }
    }

    tx
}
This needs to be changed as well; otherwise it is still spawning tasks and sending messages rather than processing packets in the loop.

@Jake-Shadle
Collaborator

Maybe I wasn't clear enough: the tasks should not be spawned at all, since those run on the tokio runtime. I think just processing the packet inside the loop itself would make things better, since it won't need to context-switch out for something that should take micro- or nanoseconds.
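Roughly, the suggested shape is something like the sketch below (all names are hypothetical stand-ins, not the PR's actual types): handle each received packet to completion inside the receive loop, instead of spawning tokio tasks and shuttling packets over an mpsc channel.

struct Packet {
    buffer: Vec<u8>,
}

// Stand-in for the now-synchronous filter chain; it should only take
// micro/nanoseconds, so there is nothing to gain from running it on a
// separate task.
fn process_packet(packet: Packet) {
    let _ = packet.buffer;
}

fn recv_loop(packets: impl Iterator<Item = Packet>) {
    for packet in packets {
        // No spawn, no channel send: the packet is handled inline before
        // the loop polls for the next completion event.
        process_packet(packet);
    }
}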

@quilkin-bot
Collaborator

Build Succeeded 🥳

Build Id: 5090f5b3-5a9e-48b4-ac68-b5f5b5079d6a

The following development images have been built, and will exist for the next 30 days:

To build this version:

git fetch git@github.com:googleforgames/quilkin.git pull/1023/head:pr_1023 && git checkout pr_1023
cargo build

Jake-Shadle self-requested a review October 8, 2024 12:05
XAMPPRocky merged commit 1d6b455 into main Oct 8, 2024
12 checks passed
Jake-Shadle deleted the ep/sync-filter branch October 8, 2024 12:37