
Benchmarks against other clients/drivers #8

Open
kitsuniru opened this issue Jan 13, 2024 · 1 comment

Comments

@kitsuniru
Contributor

Interesting: how would it perform against tokio_postgres or pgx?

@karlseguin
Owner

I think it's hard to do because there are many use cases, and different use cases might have different hotspots. If you're doing a large insert, binding performance might be the concern. If you're reading a lot of data, parsing and network might be. If we're reading, should we clone the results and own the bytes, or not?

Also, the query execution (within PostgreSQL) and network transfer (even over localhost) often account for the majority of the time any query takes. It's hard to measure something that is only a small percentage of the overall cost.
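To make that point concrete, here is a small back-of-the-envelope sketch. The 10% client share and the 2x speedup are illustrative assumptions, not measurements:

```go
package main

import "fmt"

func main() {
	// Illustrative split (not measured): client-side parsing/binding is
	// 10% of end-to-end query time; server execution + network is 90%.
	const clientPct = 10
	const speedup = 2 // a driver that handles its share twice as fast

	// A 2x faster client only removes half of the client's share.
	newClientPct := clientPct / speedup
	newTotalPct := (100 - clientPct) + newClientPct
	fmt.Println(newTotalPct) // 95: a 2x client speedup saves only 5% overall
}
```

So even a dramatically faster driver barely moves the end-to-end number, which is what makes cross-driver benchmarks hard to interpret.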

For what it's worth, the following two programs (the first in Go using pgx, the second in Zig using this library) take roughly the same time:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/jackc/pgx/v5"
)

func main() {
	conn, err := pgx.Connect(context.Background(), "postgres://localhost:5432/postgres")
	if err != nil {
		panic(err)
	}
	defer conn.Close(context.Background())

	start := time.Now()
	for i := 0; i < 100; i++ {
		rows, err := conn.Query(context.Background(), "select generate_series(1,100000) as id, md5(random()::text)")
		if err != nil {
			panic(err)
		}

		sum := 0
		l := 0
		for rows.Next() {
			var id int
			var hash string
			if err := rows.Scan(&id, &hash); err != nil {
				panic(err)
			}
			sum += id
			l += len(hash)
		}
		rows.Close()
		if sum != 5000050000 || l != 3200000 {
			panic("fail")
		}
	}
	fmt.Println(time.Since(start))
}
```
And the equivalent program in Zig:

```zig
const std = @import("std");
const pg = @import("pg");
const Allocator = std.mem.Allocator;

pub fn main() !void {
	var gpa = std.heap.GeneralPurposeAllocator(.{}){};
	const allocator = gpa.allocator();

	var conn = try pg.Conn.open(allocator, .{});
	try conn.auth(.{});

	const start = std.time.milliTimestamp();
	for (0..100) |_| {
		var result = try conn.query("select generate_series(1,100000) as id, md5(random()::text)", .{});
		defer result.deinit();

		var sum: usize = 0;
		var l: usize = 0;
		while (try result.next()) |row| {
			const id = row.get(i32, 0);
			const hash = row.get([]u8, 1);

			sum += @intCast(id);
			l += hash.len;
		}
		if (sum != 5000050000 or l != 3200000) {
			@panic("fail");
		}
	}
	std.debug.print("{d}\n", .{std.time.milliTimestamp() - start});
}
```
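Both programs use the same two sanity checks, which follow from closed-form values: the sum of `generate_series(1, 100000)` is n(n+1)/2, and PostgreSQL's `md5()` returns a 32-character hex string per row. A quick verification:

```go
package main

import "fmt"

func main() {
	// Sum of generate_series(1, n) is n*(n+1)/2.
	const n = 100000
	sum := n * (n + 1) / 2

	// md5() yields 32 hex characters per row, over n rows.
	hashLen := 32 * n

	fmt.Println(sum, hashLen) // 5000050000 3200000
}
```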
