
Built-in Simple Ledger Protocol (SLP) Support #30

Closed
2 tasks
bitjson opened this issue Nov 20, 2021 · 4 comments
Labels
enhancement New feature or request

Comments

@bitjson
Member

bitjson commented Nov 20, 2021

It would be great to add full support for Simple Ledger Protocol (SLP) queries. Chaingraph is actually very close to supporting SLP already, we likely just need a few well-designed Postgres functions. (I think our first goal should be a built-in, zero-cost implementation using only Postgres functions. Once it's clear which queries need better performance, we may also add some optional indexes or trigger-managed tables which help to precompute expensive queries.)

Building SLP Functionality in Chaingraph

Chaingraph's existing output_search_index was designed with OP_RETURN protocols like Simple Ledger Protocol (SLP) in mind. It indexes the first 26 bytes of every output (enough to cover P2PKH, the largest of the typical output types, and plenty to support searching for P2PK outputs), so OP_RETURN outputs can be searched by prefix (and often by contents) at no additional cost.

So it's already possible to query for SLP transactions; we just need to develop the Postgres functions (and maybe some additional indexes) to make common requests simple.
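For example, a minimal sketch of that kind of prefix query (the SLP protocol identifier is OP_RETURN OP_PUSHBYTES_4 0x534c5000, i.e. "SLP" plus a null byte; the index name below is only illustrative):

-- A rough sketch: select all outputs whose locking bytecode begins with
-- the SLP protocol identifier, OP_RETURN OP_PUSHBYTES_4 0x534c5000 ("SLP\0").
SELECT * FROM output
WHERE substring(locking_bytecode from 1 for 6) = '\x6a04534c5000'::bytea;

-- If that predicate proves expensive, a partial index could precompute it
-- (index name illustrative):
CREATE INDEX output_slp_prefix_index ON output (transaction_hash)
WHERE substring(locking_bytecode from 1 for 6) = '\x6a04534c5000'::bytea;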

This snippet gives a sense of how complex features like SLP support can be added with just the PostgreSQL procedural language:

-- Bitauth functionality
CREATE FUNCTION collect_authchains (authbase_transaction_hash bytea)
RETURNS TABLE(authhead bytea, authchain_hashes bytea[], authchain_internal_ids bigint[], unspent_authhead boolean)
LANGUAGE sql STABLE
AS $$
WITH RECURSIVE cte AS (
  SELECT
    transaction_hash AS migration_hash,
    ARRAY[transaction_hash] AS authchain_hashes,
    ARRAY[transaction.internal_id] AS authchain_internal_ids,
    NOT EXISTS (SELECT NULL FROM input
      WHERE output.transaction_hash = input.outpoint_transaction_hash AND
        output.output_index = input.outpoint_index) AS is_unspent,
    false AS exceeds_max_depth,
    0 AS depth
  FROM output
  INNER JOIN transaction ON transaction.hash = transaction_hash
  WHERE transaction_hash = authbase_transaction_hash AND output_index = 0
  UNION ALL
  SELECT transaction.hash,
    cte.authchain_hashes || transaction.hash,
    cte.authchain_internal_ids || transaction.internal_id,
    -- true if the output at transaction.hash, index 0 is unspent
    NOT EXISTS (SELECT NULL
      FROM output INNER JOIN input ON
        output.transaction_hash = input.outpoint_transaction_hash AND
        output.output_index = input.outpoint_index
      WHERE
        output.transaction_hash = transaction.hash AND
        output.output_index = 0
    ) AS is_unspent,
    cte.depth > 100000 AS exceeds_max_depth,
    cte.depth + 1 AS depth
  FROM cte
  INNER JOIN input
    ON cte.migration_hash = input.outpoint_transaction_hash AND input.outpoint_index = 0
  INNER JOIN transaction
    ON input.transaction_internal_id = transaction.internal_id
  WHERE NOT (is_unspent OR exceeds_max_depth)
)
SELECT authchain_hashes[array_upper(authchain_hashes, 1)] AS authhead_transaction_hash, authchain_hashes, authchain_internal_ids, is_unspent AS unspent_authhead
FROM cte WHERE is_unspent;
$$;
COMMENT ON FUNCTION collect_authchains (bytea) IS 'Recursively scan from the provided hash, aggregating all possible authchains (across network splits) through their latest, unspent authhead. If the maximum depth of 100,000 is exceeded, the 100,000th authhead will be returned, and unspent_authhead will be set to false.';

CREATE VIEW bitauth_view AS
SELECT internal_id, unspent_authhead, array_upper(authchain_internal_ids, 1) AS authchain_length, hash AS authbase, authhead, authchain_hashes, authchain_internal_ids
FROM transaction, collect_authchains(transaction.hash);
COMMENT ON VIEW bitauth_view IS 'A view which generates all Bitauth-related data, containing one or more rows for each transaction. Each row represents a possible authhead for a particular transaction, and the authchain column includes the chain of migration transactions required to reach that authhead. This view provides data to other Hasura-tracked views.';

CREATE VIEW authchain_view AS
SELECT internal_id AS transaction_internal_id, authhead AS authhead_transaction_hash, authchain_length, unspent_authhead
FROM bitauth_view;
COMMENT ON VIEW authchain_view IS 'A view which contains one row per possible authhead per transaction.';

CREATE VIEW authchain_migrations_view AS
SELECT v.internal_id AS authbase_internal_id, a.migration_element_number - 1 AS migration_index, a.migration_transaction_internal_id
FROM bitauth_view AS v
LEFT JOIN LATERAL unnest(v.authchain_internal_ids) WITH ORDINALITY AS a(migration_transaction_internal_id, migration_element_number) ON TRUE;
COMMENT ON VIEW authchain_migrations_view IS 'A view which maps migration transactions to their index in a particular authchain.';

CREATE FUNCTION authchain_migration_transaction(authchain_migration_row authchain_migrations_view) RETURNS transaction
LANGUAGE sql IMMUTABLE
AS $$
  SELECT * FROM transaction WHERE internal_id = $1.migration_transaction_internal_id LIMIT 1;
$$;
COMMENT ON FUNCTION authchain_migration_transaction (authchain_migrations_view) IS 'This function powers the "transaction.authchains[n].migrations[n].transaction" computed field in migration objects. This is a workaround to improve performance over an equivalent "transaction" standard Hasura relationship. When implemented as a relationship, the Hasura-compiled SQL query requires a full scan of the authchain_migrations_view, which is extremely large and expensive to compute.';

Bitauth functionality is implemented in fewer than 100 lines; I expect basic SLP validation to be fairly similar. (And we can probably copy quite a bit from how collect_authchains traces the chain(s) of transactions descending from a parent transaction.)
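As a rough illustration of that approach, a depth-capped descendant walk might look something like this (the function name and the flat traversal are hypothetical; real SLP validation would also need to check token types and amounts along each edge of the DAG):

-- A hypothetical sketch (not existing Chaingraph code): collect every
-- transaction which spends any output of the given transaction, recursively,
-- in the same style as collect_authchains. Depth is capped to avoid
-- runaway recursion.
CREATE FUNCTION collect_slp_descendants (genesis_transaction_hash bytea)
RETURNS TABLE(descendant_hash bytea, depth integer)
LANGUAGE sql STABLE
AS $$
WITH RECURSIVE cte AS (
  SELECT genesis_transaction_hash AS descendant_hash, 0 AS depth
  UNION ALL
  SELECT transaction.hash, cte.depth + 1
  FROM cte
  INNER JOIN input ON cte.descendant_hash = input.outpoint_transaction_hash
  INNER JOIN transaction ON input.transaction_internal_id = transaction.internal_id
  WHERE cte.depth < 1000
)
SELECT DISTINCT descendant_hash, depth FROM cte;
$$;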

I've already built out some of the utility functions we'll need – reverse_bytea, encode_uint32le, encode_uint64le, encode_bitcoin_var_int – and you can get a sense of how parsing might work by looking through parse_bytecode_pattern (which parses VM bytecode to extract opcode "patterns", cutting out all the pushed data):

-- Chaingraph Utilities
CREATE FUNCTION parse_bytecode_pattern(bytecode bytea) RETURNS bytea
LANGUAGE plpgsql IMMUTABLE
AS $$
DECLARE
  pattern bytea := '\x'::bytea;
  selected_byte integer;
  scratch bytea;
  i integer := 0;
  bytecode_length integer := octet_length(bytecode);
BEGIN
  WHILE i < bytecode_length LOOP
    selected_byte := get_byte(bytecode, i);
    pattern := pattern || substring(bytecode from (i + 1) for 1);
    IF selected_byte > 78 OR selected_byte = 0 THEN
      -- OP_0 (0) and all opcodes after OP_PUSHDATA_4 (78) are single-byte instructions
      i := i + 1;
    ELSIF selected_byte > 0 AND selected_byte <= 75 THEN
      -- OP_PUSHBYTES_1 (1) through OP_PUSHBYTES_75 (75) directly indicate the length of pushed data
      i := i + 1 + selected_byte;
    ELSIF selected_byte = 76 THEN
      IF bytecode_length - i < 2 THEN
        -- malformed, return immediately
        RETURN pattern;
      END IF;
      -- OP_PUSHDATA_1 reads one length-byte
      i := i + 2 + get_byte(bytecode, (i + 1));
    ELSIF selected_byte = 77 THEN
      IF bytecode_length - i < 3 THEN
        -- malformed, return immediately
        RETURN pattern;
      END IF;
      -- OP_PUSHDATA_2 reads two length-bytes
      scratch := substring(bytecode from (i + 2) for 2);
      -- parse scratch as an unsigned, two-byte, little-endian number:
      i := i + 3 + ((get_byte(scratch, 1) << 8) | get_byte(scratch, 0));
    ELSIF selected_byte = 78 THEN
      IF bytecode_length - i < 5 THEN
        -- malformed, return immediately
        RETURN pattern;
      END IF;
      -- OP_PUSHDATA_4 reads four length-bytes
      scratch := substring(bytecode from (i + 2) for 4);
      -- parse scratch as an unsigned, four-byte, little-endian number:
      i := i + 5 + ((get_byte(scratch, 3) << 24) | (get_byte(scratch, 2) << 16) | (get_byte(scratch, 1) << 8) | get_byte(scratch, 0));
    END IF;
  END LOOP;
  RETURN pattern;
END;
$$;
COMMENT ON FUNCTION parse_bytecode_pattern (bytea) IS 'Parse a sequence of bitcoin VM bytecode, extracting the first byte of each instruction. The resulting pattern excludes the contents of pushed values such that similar bytecode sequences produce the same pattern, even if different data or public keys are used.';
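For example, applying it to a standard P2PKH locking bytecode (the expected output here is taken from the e2e tests below):

SELECT encode(parse_bytecode_pattern('\x76a914000000000000000000000000000000000000000088ac'::bytea), 'hex');
-- returns '76a91488ac': OP_DUP OP_HASH160 OP_PUSHBYTES_20 OP_EQUALVERIFY
-- OP_CHECKSIG, with the 20 pushed bytes cut out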

These types of functions can be used to create views, which are much friendlier interfaces for querying:

-- Built-in Analysis Views ---
-- These views aren't used in normal operation; they are included to make chain analysis easier.
-- Consider creating more focused, MATERIALIZED views for active research.
CREATE VIEW p2sh_output_view AS
  SELECT * FROM output
  WHERE parse_bytecode_pattern(output.locking_bytecode) = '\xa91487'::bytea;
COMMENT ON VIEW p2sh_output_view IS 'A view containing all outputs which match the P2SH locking bytecode pattern.';
CREATE VIEW p2sh_input_view AS
  SELECT *, input_redeem_bytecode_pattern(input) AS redeem_bytecode_pattern
  FROM input INNER JOIN p2sh_output_view
    ON input.outpoint_transaction_hash = transaction_hash AND
      input.outpoint_index = output_index;
COMMENT ON VIEW p2sh_input_view IS 'A view containing all inputs which spent P2SH outputs.';
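And as the comment above suggests, active research could precompute an expensive subset once with a materialized view (a sketch; the view name is illustrative):

-- A sketch: precompute the set of outputs matching the SLP lokad prefix,
-- so research queries avoid rescanning the full output table.
CREATE MATERIALIZED VIEW slp_candidate_output_view AS
SELECT * FROM output
WHERE substring(locking_bytecode from 1 for 6) = '\x6a04534c5000'::bytea;
-- Refresh after new blocks are indexed:
REFRESH MATERIALIZED VIEW slp_candidate_output_view;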

Finally, there are examples of e2e tests for the existing Postgres functions here:

chaingraph/src/e2e/e2e.spec.ts, lines 1052 to 1441 (at commit 796ca96):

/**
 * These tests run concurrently after all serial tests have completed. They can
 * also be run independently via `yarn test:e2e:postgres`.
 */
const bytecodeFunction: Macro<[string, string, string]> = async (
  t,
  functionName,
  bytecodeHex,
  patternHex
  // eslint-disable-next-line max-params
) => {
  const result = await client.query<{ encode: string }>(
    /* sql */ `SELECT encode(${functionName} ($1), 'hex');`,
    [hexToBin(bytecodeHex)]
  );
  t.deepEqual(result.rows[0].encode, patternHex);
};
bytecodeFunction.title = (
  providedTitle,
  functionName,
  _bytecodeHex,
  patternHex
  // eslint-disable-next-line max-params
) => `[e2e] [postgres] ${functionName}${patternHex}: ${providedTitle ?? ''}`;
test(
  'P2PKH',
  bytecodeFunction,
  'parse_bytecode_pattern',
  '76a914000000000000000000000000000000000000000088ac',
  '76a91488ac'
);
test(
  'P2SH',
  bytecodeFunction,
  'parse_bytecode_pattern',
  'a914000000000000000000000000000000000000000087',
  'a91487'
);
test(
  'OP_RETURN (fixed pushes)',
  bytecodeFunction,
  'parse_bytecode_pattern',
  '6a04000000005120000000000000000000000000000000000000000000000000000000000000000004000000000400000000',
  '6a0451200404'
);
test(
  'OP_RETURN with OP_PUSHDATA1',
  bytecodeFunction,
  'parse_bytecode_pattern',
  '6a026d0c090000000000000000004c5c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000',
  '6a02094c'
);
const allOnes = 0x11;
const minPushData2 = 256;
test(
  'OP_RETURN with OP_PUSHDATA2',
  bytecodeFunction,
  'parse_bytecode_pattern',
  `6a${binToHex(
    encodeDataPush(new Uint8Array(minPushData2).fill(allOnes))
  )}515151`,
  '6a4d515151'
);
const minPushData4 = 65536;
test(
  'OP_RETURN with OP_PUSHDATA4',
  bytecodeFunction,
  'parse_bytecode_pattern',
  `6a${binToHex(
    encodeDataPush(new Uint8Array(minPushData4).fill(allOnes))
  )}515151`,
  '6a4e515151'
);
test(
  'malformed OP_PUSHBYTES',
  bytecodeFunction,
  'parse_bytecode_pattern',
  '515102',
  '515102'
);
test(
  'malformed OP_PUSHDATA1',
  bytecodeFunction,
  'parse_bytecode_pattern',
  '51514c',
  '51514c'
);
test(
  'malformed OP_PUSHDATA2',
  bytecodeFunction,
  'parse_bytecode_pattern',
  '51514d11',
  '51514d'
);
test(
  'malformed OP_PUSHDATA4',
  bytecodeFunction,
  'parse_bytecode_pattern',
  '51514e112233',
  '51514e'
);
test(
  'P2PKH',
  bytecodeFunction,
  'parse_bytecode_pattern_with_pushdata_lengths',
  '76a914000000000000000000000000000000000000000088ac',
  '76a91488ac'
);
test(
  'P2SH',
  bytecodeFunction,
  'parse_bytecode_pattern_with_pushdata_lengths',
  'a914000000000000000000000000000000000000000087',
  'a91487'
);
test(
  'OP_RETURN (fixed pushes)',
  bytecodeFunction,
  'parse_bytecode_pattern_with_pushdata_lengths',
  '6a04000000005120000000000000000000000000000000000000000000000000000000000000000004000000000400000000',
  '6a0451200404'
);
test(
  'OP_RETURN with OP_PUSHDATA1 (memo.cash)',
  bytecodeFunction,
  'parse_bytecode_pattern_with_pushdata_lengths',
  '6a026d0c090000000000000000004c5c0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000',
  '6a02094c5c'
);
test(
  'OP_RETURN with OP_PUSHDATA2',
  bytecodeFunction,
  'parse_bytecode_pattern_with_pushdata_lengths',
  `6a${binToHex(
    encodeDataPush(new Uint8Array(minPushData2).fill(allOnes))
  )}515151`,
  '6a4d0001515151'
);
test(
  'OP_RETURN with OP_PUSHDATA4',
  bytecodeFunction,
  'parse_bytecode_pattern_with_pushdata_lengths',
  `6a${binToHex(
    encodeDataPush(new Uint8Array(minPushData4).fill(allOnes))
  )}515151`,
  '6a4e00000100515151'
);
test(
  'malformed OP_PUSHBYTES',
  bytecodeFunction,
  'parse_bytecode_pattern_with_pushdata_lengths',
  '515102',
  '515102'
);
test(
  'malformed OP_PUSHDATA1',
  bytecodeFunction,
  'parse_bytecode_pattern_with_pushdata_lengths',
  '51514c',
  '51514c'
);
test(
  'malformed OP_PUSHDATA2',
  bytecodeFunction,
  'parse_bytecode_pattern_with_pushdata_lengths',
  '51514d11',
  '51514d'
);
test(
  'malformed OP_PUSHDATA4',
  bytecodeFunction,
  'parse_bytecode_pattern_with_pushdata_lengths',
  '51514e112233',
  '51514e'
);
test(
  'no redeem',
  bytecodeFunction,
  'parse_bytecode_pattern_redeem',
  `0002000051`,
  ''
);
test(
  'OP_PUSHBYTES redeem',
  bytecodeFunction,
  'parse_bytecode_pattern_redeem',
  `0003019951`,
  '0151'
);
const minPushData1 = 76;
test(
  'OP_PUSHDATA1 redeem',
  bytecodeFunction,
  'parse_bytecode_pattern_redeem',
  `00020000${binToHex(
    encodeDataPush(
      flattenBinArray([
        hexToBin('00'),
        encodeDataPush(new Uint8Array(minPushData1).fill(allOnes)),
        hexToBin('515253'),
      ])
    )
  )}`,
  '004c515253'
);
test(
  'OP_PUSHDATA2 redeem',
  bytecodeFunction,
  'parse_bytecode_pattern_redeem',
  `00020000${binToHex(
    encodeDataPush(
      flattenBinArray([
        hexToBin('00'),
        encodeDataPush(new Uint8Array(minPushData2).fill(allOnes)),
        hexToBin('515253'),
      ])
    )
  )}`,
  '004d515253'
);
test(
  'OP_PUSHDATA4 redeem',
  bytecodeFunction,
  'parse_bytecode_pattern_redeem',
  `00020000${binToHex(
    encodeDataPush(
      flattenBinArray([
        hexToBin('00'),
        encodeDataPush(new Uint8Array(minPushData4).fill(allOnes)),
        hexToBin('515253'),
      ])
    )
  )}`,
  '004e515253'
);
test('[e2e] [postgres] encode_uint16le', async (t) => {
  const query = async (encoded: number) =>
    (
      await client.query<{ encode: string }>(
        /* sql */ `SELECT encode(encode_uint16le ($1), 'hex');`,
        [encoded]
      )
    ).rows[0].encode;
  /* eslint-disable @typescript-eslint/no-magic-numbers */
  t.deepEqual(await query(0), '0000');
  t.deepEqual(await query(1), '0100');
  t.deepEqual(await query(2), '0200');
  t.deepEqual(await query(254), 'fe00');
  t.deepEqual(await query(255), 'ff00');
  t.deepEqual(await query(256), '0001');
  t.deepEqual(await query(1000), 'e803');
  // cspell: disable-next-line
  t.deepEqual(await query(65534), 'feff');
  t.deepEqual(await query(65535), 'ffff');
  /* eslint-enable @typescript-eslint/no-magic-numbers */
});
test('[e2e] [postgres] encode_uint32le', async (t) => {
  const query = async (encoded: number) =>
    (
      await client.query<{ encode: string }>(
        /* sql */ `SELECT encode(encode_uint32le ($1), 'hex');`,
        [encoded]
      )
    ).rows[0].encode;
  /* eslint-disable @typescript-eslint/no-magic-numbers */
  t.deepEqual(await query(0), '00000000');
  t.deepEqual(await query(1), '01000000');
  t.deepEqual(await query(536870912), '00000020');
  t.deepEqual(await query(541065216), '00004020');
  t.deepEqual(await query(545259520), '00008020');
  t.deepEqual(await query(549453824), '0000c020');
  t.deepEqual(await query(536928256), '00e00020');
  t.deepEqual(await query(536870913), '01000020');
  t.deepEqual(await query(536870914), '02000020');
  t.deepEqual(await query(1073676288), '0000ff3f');
  t.deepEqual(await query(1073733632), '00e0ff3f');
  t.deepEqual(await query(2147483647), 'ffffff7f');
  t.deepEqual(await query(2147483648), '00000080');
  t.deepEqual(await query(2147483649), '01000080');
  // cspell: disable-next-line
  t.deepEqual(await query(4294967294), 'feffffff');
  t.deepEqual(await query(4294967295), 'ffffffff');
  /* eslint-enable @typescript-eslint/no-magic-numbers */
});
test('[e2e] [postgres] encode_int32le', async (t) => {
  const query = async (encoded: number) =>
    (
      await client.query<{ encode: string }>(
        /* sql */ `SELECT encode(encode_int32le ($1), 'hex');`,
        [encoded]
      )
    ).rows[0].encode;
  /* eslint-disable @typescript-eslint/no-magic-numbers, line-comment-position */
  t.deepEqual(await query(1), '01000000');
  t.deepEqual(await query(2), '02000000');
  t.deepEqual(await query(3), '03000000'); // version of TX: 110da331fd5336038316c4709404aea5855afed21f054f5bba01bfef099d5da1
  t.deepEqual(await query(4), '04000000'); // version of TX: 6ae17e22dba03522126f9268de58de5a440ccdb334e137861f90766901e806fd
  t.deepEqual(await query(2147483647), 'ffffff7f');
  // cspell: disable-next-line
  t.deepEqual(await query(-2), 'feffffff');
  t.deepEqual(await query(-1), 'ffffffff');
  t.deepEqual(await query(0), '00000000'); // version of TX: 64147d3d27268778c9d27aa434e8f270f96b2be859658950accde95a2f0ce79d
  t.deepEqual(await query(-2147483648), '00000080');
  t.deepEqual(await query(-2147483647), '01000080');
  t.deepEqual(await query(-2130706433), 'ffffff80'); // version of TX: 35e79ee733fad376e76d16d1f10088273c2f4c2eaba1374a837378a88e530005
  t.deepEqual(await query(-2107285824), 'c05e6582'); // version of TX: 637dd1a3418386a418ceeac7bb58633a904dbf127fa47bbea9cc8f86fef7413f
  t.deepEqual(await query(-1703168784), 'f0b47b9a'); // version of TX: c659729a7fea5071361c2c1a68551ca2bf77679b27086cc415adeeb03852e369
  /* eslint-enable @typescript-eslint/no-magic-numbers, line-comment-position */
});
test('[e2e] [postgres] encode_uint64le', async (t) => {
  const query = async (encoded: bigint | number) =>
    (
      await client.query<{ encode: string }>(
        /* sql */ `SELECT encode(encode_uint64le ($1), 'hex');`,
        [encoded]
      )
    ).rows[0].encode;
  /* eslint-disable @typescript-eslint/no-magic-numbers */
  t.deepEqual(await query(0), '0000000000000000');
  t.deepEqual(await query(1), '0100000000000000');
  t.deepEqual(await query(2), '0200000000000000');
  t.deepEqual(await query(254), 'fe00000000000000');
  t.deepEqual(await query(255), 'ff00000000000000');
  t.deepEqual(await query(256), '0001000000000000');
  t.deepEqual(await query(1000), 'e803000000000000');
  t.deepEqual(await query(65534), 'feff000000000000');
  t.deepEqual(await query(65535), 'ffff000000000000');
  t.deepEqual(await query(0xffffffff), 'ffffffff00000000');
  t.deepEqual(await query(BigInt('0xffffffffffff')), 'ffffffffffff0000');
  t.deepEqual(await query(BigInt('9223372036854775807')), 'ffffffffffffff7f');
  /* eslint-enable @typescript-eslint/no-magic-numbers */
});
test('[e2e] [postgres] encode_bitcoin_var_int', async (t) => {
  const query = async (encoded: bigint | number) =>
    (
      await client.query<{ encode: string }>(
        /* sql */ `SELECT encode(encode_bitcoin_var_int ($1), 'hex');`,
        [encoded]
      )
    ).rows[0].encode;
  /* eslint-disable @typescript-eslint/no-magic-numbers */
  t.deepEqual(await query(0), '00');
  t.deepEqual(await query(1), '01');
  t.deepEqual(await query(2), '02');
  t.deepEqual(await query(251), 'fb');
  t.deepEqual(await query(252), 'fc');
  // cspell: disable-next-line
  t.deepEqual(await query(253), 'fdfd00');
  // cspell: disable-next-line
  t.deepEqual(await query(254), 'fdfe00');
  // cspell: disable-next-line
  t.deepEqual(await query(255), 'fdff00');
  t.deepEqual(await query(256), 'fd0001');
  // cspell: disable-next-line
  t.deepEqual(await query(65534), 'fdfeff');
  // cspell: disable-next-line
  t.deepEqual(await query(65535), 'fdffff');
  t.deepEqual(await query(65536), 'fe00000100');
  t.deepEqual(await query(65537), 'fe01000100');
  t.deepEqual(await query(65538), 'fe02000100');
  // cspell: disable-next-line
  t.deepEqual(await query(4294967294), 'fefeffffff');
  // cspell: disable-next-line
  t.deepEqual(await query(4294967295), 'feffffffff');
  t.deepEqual(await query(4294967296), 'ff0000000001000000');
  t.deepEqual(await query(4294967297), 'ff0100000001000000');
  t.deepEqual(await query(4294967298), 'ff0200000001000000');
  t.deepEqual(await query(BigInt('9223372036854775806')), 'fffeffffffffffff7f');
  t.deepEqual(await query(BigInt('9223372036854775807')), 'ffffffffffffffff7f');
  /* eslint-enable @typescript-eslint/no-magic-numbers */
});

Hopefully that helps orient interested contributors toward how we might implement SLP support in Chaingraph.

API

I haven't thought enough yet about how the SLP API should work, but maybe:

  • output.slp – would either be NULL (if the output is not a valid SLP SEND output with an amount) or include amount, token_id, and token_type, plus a token object relationship with the token's genesis details
  • one or more fields on transaction (prefixed with slp) – would be NULL if the transaction is not an SLP transaction, and would otherwise provide information on the GENESIS, MINT, and COMMIT transaction types (not sure what this should look like yet)

And I think the output.slp field would be sufficient to make search_output work for wallets looking up both satoshi values and token values.
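To sketch how output.slp might be computed (the function name is hypothetical, and the byte offsets assume the token type 1 SEND layout from the SLP spec; a real implementation must also validate the token DAG, not just parse the OP_RETURN):

-- A hypothetical sketch: extract the token_id from a type 1 SLP SEND
-- OP_RETURN output. Assumed layout (per the SLP spec): 6a (OP_RETURN),
-- 04 534c5000 (lokad "SLP\0"), 01 01 (token type 1), 04 53454e44 ("SEND"),
-- 20 <32-byte token id>, then one 08 <amount> per output.
CREATE FUNCTION output_slp_send_token_id (output_row output) RETURNS bytea
LANGUAGE sql IMMUTABLE
AS $$
  SELECT CASE
    WHEN substring($1.locking_bytecode from 1 for 14) = '\x6a04534c500001010453454e4420'::bytea
      THEN substring($1.locking_bytecode from 15 for 32)
    ELSE NULL
  END;
$$;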

I'm still familiarizing myself with the SLP spec, so any ideas or feedback would be highly appreciated!

Progress

@bitjson bitjson added the enhancement New feature or request label Nov 20, 2021
@bitjson
Member Author

bitjson commented Nov 20, 2021

Cc: @scherrey and @blockparty-sh, who have both expressed interest in SLP support. I'd love to hear your thoughts on the API, and if you're interested in working on an implementation, I'd be very happy to work with you. (Either here or in the Chaingraph Devs Telegram group.)

@christroutner

As I understand it, there are two parts to validating an SLP transaction:

  • The OP_RETURN data needs to comply with the specification. This can be validated using the slp-parser library.
  • The DAG back to the GENESIS transaction needs to be validated.

That second part is the more computationally intensive operation.

The primary metric I'm focused on with the psf-slp-indexer is memory usage. A secondary metric is output speed. Using LevelDB accomplishes both goals: it keeps the memory footprint to around 2 GB or less, and output is a simple key-value lookup, so it's very fast.

I'll be interested in following the development of Chaingraph and its ability to track SLP transactions, but I don't think I'll be able to contribute very much. If there is room for synergy, or code that can be shared between projects, I'm happy to contribute on that front.

@bitjson
Member Author

bitjson commented Nov 23, 2021

Thanks for chiming in @christroutner, I'll look forward to seeing that develop.

Just to add to this issue: I think we can get SLP support working reasonably with just a few Postgres functions, but #29 would allow us to precompute everything and make lookups instant.
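For illustration, the trigger-managed approach might look roughly like this (all names here are hypothetical, and the classification logic is elided):

-- A hypothetical sketch: maintain a precomputed SLP table as transactions
-- are saved, so lookups become simple indexed reads.
CREATE TABLE slp_transaction (
  transaction_internal_id bigint PRIMARY KEY,
  token_id bytea NOT NULL,
  transaction_type text NOT NULL -- e.g. 'GENESIS', 'MINT', 'SEND', or 'COMMIT'
);
CREATE FUNCTION record_slp_transaction () RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
  -- SLP classification of NEW would happen here, e.g. using
  -- parse_bytecode_pattern and the encode_* utilities.
  RETURN NEW;
END;
$$;
CREATE TRIGGER slp_transaction_trigger
  AFTER INSERT ON transaction
  FOR EACH ROW EXECUTE FUNCTION record_slp_transaction ();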

So I'll probably consider there to be 2 stages for this issue:

@bitjson bitjson pinned this issue Nov 24, 2021
@bitjson
Member Author

bitjson commented Jan 20, 2023

With CashTokens now supported (3ebddce), I don't expect to ever add SLP support to Chaingraph.

I'd be happy to take a PR if anyone is interested, but I'm going to close this issue for now.

@bitjson bitjson closed this as completed Jan 20, 2023
@bitjson bitjson unpinned this issue Jan 20, 2023