feat: Recoverable keys feature for the dashboard #2110

Closed · wants to merge 51 commits into main from harshbhat/recoverable-keys

Commits (51)
75db25a
Recoverable keys feature for the dashboard
harshsbhat Sep 17, 2024
365117e
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Sep 17, 2024
ab6ad58
Checking for the Store_encrypted_keys from teh dashboard too
harshsbhat Sep 18, 2024
197fe6e
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Sep 18, 2024
b795941
Show keyId after creating a new key
harshsbhat Sep 17, 2024
e1a92c2
lint
harshsbhat Sep 17, 2024
3379c0e
Requested changes
harshsbhat Sep 18, 2024
7cd8dc6
Removed unused component
harshsbhat Sep 18, 2024
6f903de
Add the actual link in the description
harshsbhat Sep 18, 2024
43aa747
Remove wrong rebase
harshsbhat Sep 18, 2024
e89dfa3
merge conflicts
harshsbhat Sep 18, 2024
c8d08d0
Fixing pnpm lock file
harshsbhat Sep 18, 2024
babbde4
Merge remote-tracking branch 'origin/main' into harshbhat/recoverable…
harshsbhat Sep 18, 2024
4b74619
Error handling for encrypt fail as well as disabled store encrypt keys
harshsbhat Sep 19, 2024
596b315
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Sep 19, 2024
7ffdbc4
fix: cf cache ratelimits (#2112)
chronark Sep 19, 2024
7028fc4
Merge remote-tracking branch 'origin/main' into harshbhat/recoverable…
harshsbhat Sep 19, 2024
2d6d304
Disabling the card if store encrypted is disabled
harshsbhat Sep 19, 2024
68c6599
[autofix.ci] apply automated fixes
autofix-ci[bot] Sep 19, 2024
ee40896
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Sep 20, 2024
ca22641
Added transaction
harshsbhat Sep 20, 2024
2bcfb89
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Sep 21, 2024
9302c62
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Sep 21, 2024
9699b48
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Sep 24, 2024
c46e0a0
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Sep 24, 2024
77b5e03
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Sep 24, 2024
ee5e94a
Change the display after creating the new key with recoverable
harshsbhat Sep 24, 2024
fce16ae
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Sep 25, 2024
f9877c7
Update apps/dashboard/lib/trpc/routers/key/create.ts
harshsbhat Sep 27, 2024
314ac7a
Merge remote-tracking branch 'origin/main' into harshbhat/recoverable…
harshsbhat Sep 28, 2024
bf876bc
Fix link and remove licence from vault
harshsbhat Sep 28, 2024
441a038
Merge branch 'main' into harshbhat/recoverable-keys
harshsbhat Oct 1, 2024
8125c72
[autofix.ci] apply automated fixes
autofix-ci[bot] Oct 1, 2024
050145c
feat: Added support to sidebar (#2119)
harshsbhat Oct 2, 2024
0535b2b
chore(email): update styling, copy, urls (#2141)
mcstepp Oct 2, 2024
4944c5f
chore: Delay email and upgrade Resend (#2152)
perkinsjr Oct 2, 2024
3a5bf01
fix: make keys UI responsive to mobile devices (#2145)
geraldmaboshe Oct 2, 2024
1545e7b
chore: Update to dot com (#2155)
perkinsjr Oct 2, 2024
b15c192
Update email to be "James from Unkey <[email protected]>"
perkinsjr Oct 2, 2024
bf1dcf7
chore: update legacy_keys_verifyKey.ts (#2158)
eltociear Oct 3, 2024
e7f6bf1
feat: added README.md in /packages/api (#2153)
Abhi-Bohora Oct 3, 2024
a78085a
chore: create changeset
chronark Oct 3, 2024
e8f0f5d
chore(release): version packages (#2160)
github-actions[bot] Oct 3, 2024
7da6d0d
Recoverable keys feature for the dashboard
harshsbhat Sep 17, 2024
72f9983
Checking for the Store_encrypted_keys from teh dashboard too
harshsbhat Sep 18, 2024
69942ca
Show keyId after creating a new key
harshsbhat Sep 17, 2024
ebea66b
Requested changes
harshsbhat Sep 18, 2024
6a35257
merge conflicts
harshsbhat Sep 18, 2024
c739b68
Fixing pnpm lock file
harshsbhat Sep 18, 2024
c25761f
Error handling for encrypt fail as well as disabled store encrypt keys
harshsbhat Sep 19, 2024
5db2d9a
fix: cf cache ratelimits (#2112)
chronark Sep 19, 2024
@@ -0,0 +1,150 @@
package identities

import (
"context"
"fmt"
"os"
"testing"
"time"

"github.com/stretchr/testify/require"
unkey "github.com/unkeyed/unkey-go"
"github.com/unkeyed/unkey-go/models/components"
"github.com/unkeyed/unkey-go/models/operations"
attack "github.com/unkeyed/unkey/apps/agent/pkg/testutil"
"github.com/unkeyed/unkey/apps/agent/pkg/uid"
"github.com/unkeyed/unkey/apps/agent/pkg/util"

Update import paths to use module references instead of relative paths

The import paths for internal packages use relative paths, which may cause issues with module resolution. Consider updating them to use module references.

Apply this diff to update the import paths:

-attack "github.com/unkeyed/unkey/apps/agent/pkg/testutil"
-"github.com/unkeyed/unkey/apps/agent/pkg/uid"
-"github.com/unkeyed/unkey/apps/agent/pkg/util"
+attack "unkey/apps/agent/pkg/testutil"
+"unkey/apps/agent/pkg/uid"
+"unkey/apps/agent/pkg/util"

Ensure that the module path unkey is correctly specified in your go.mod file.
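For reference, the shortened imports would only resolve if the module itself were declared under that short path. A hypothetical go.mod sketch (placeholder versions, not the repository's actual layout):

module unkey

go 1.22 // placeholder toolchain version

require (
	github.com/stretchr/testify v1.9.0 // placeholder version
	github.com/unkeyed/unkey-go v0.1.0 // external SDK keeps its full import path
)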

)

func TestIdentitiesRatelimitAccuracy(t *testing.T) {
// Step 1 --------------------------------------------------------------------
// Setup the sdk, create an API and an identity
// ---------------------------------------------------------------------------

ctx := context.Background()
rootKey := os.Getenv("INTEGRATION_TEST_ROOT_KEY")
require.NotEmpty(t, rootKey, "INTEGRATION_TEST_ROOT_KEY must be set")
baseURL := os.Getenv("UNKEY_BASE_URL")
require.NotEmpty(t, baseURL, "UNKEY_BASE_URL must be set")

sdk := unkey.New(
unkey.WithServerURL(baseURL),
unkey.WithSecurity(rootKey),
)

for _, nKeys := range []int{1} { //, 3, 10, 1000} {
t.Run(fmt.Sprintf("with %d keys", nKeys), func(t *testing.T) {

for _, tc := range []struct {
rate attack.Rate
testDuration time.Duration
}{
{
rate: attack.Rate{Freq: 20, Per: time.Second},
testDuration: 1 * time.Minute,
},
{
rate: attack.Rate{Freq: 100, Per: time.Second},
testDuration: 5 * time.Minute,
},
} {
t.Run(fmt.Sprintf("[%s] over %s", tc.rate.String(), tc.testDuration), func(t *testing.T) {
api, err := sdk.Apis.CreateAPI(ctx, operations.CreateAPIRequestBody{
Name: uid.New("testapi"),
})
require.NoError(t, err)

externalId := uid.New("testuser")

_, err = sdk.Identities.CreateIdentity(ctx, operations.CreateIdentityRequestBody{
ExternalID: externalId,
Meta: map[string]any{
"email": "[email protected]",
},
})
require.NoError(t, err)

// Step 2 --------------------------------------------------------------------
// Update the identity with ratelimits
// ---------------------------------------------------------------------------

inferenceLimit := operations.UpdateIdentityRatelimits{
Name: "inferenceLimit",
Limit: 100,
Duration: time.Minute.Milliseconds(),
}

_, err = sdk.Identities.UpdateIdentity(ctx, operations.UpdateIdentityRequestBody{
ExternalID: unkey.String(externalId),
Ratelimits: []operations.UpdateIdentityRatelimits{inferenceLimit},
})
require.NoError(t, err)

// Step 4 --------------------------------------------------------------------
// Create keys that share the same identity and therefore the same ratelimits
// ---------------------------------------------------------------------------

keys := make([]operations.CreateKeyResponseBody, nKeys)
for i := 0; i < len(keys); i++ {
key, err := sdk.Keys.CreateKey(ctx, operations.CreateKeyRequestBody{
APIID: api.Object.APIID,
ExternalID: unkey.String(externalId),
Environment: unkey.String("integration_test"),
})
require.NoError(t, err)
keys[i] = *key.Object
}

// Step 5 --------------------------------------------------------------------
// Test ratelimits
// ---------------------------------------------------------------------------

total := 0
passed := 0

results := attack.Attack(t, tc.rate, tc.testDuration, func() bool {

// Each request uses one of the keys randomly
key := util.RandomElement(keys).Key

res, err := sdk.Keys.VerifyKey(context.Background(), components.V1KeysVerifyKeyRequest{
APIID: unkey.String(api.Object.APIID),
Key: key,
Ratelimits: []components.Ratelimits{
{Name: inferenceLimit.Name},
},
})
require.NoError(t, err)

return res.V1KeysVerifyKeyResponse.Valid

})

for valid := range results {
total++
if valid {
passed++
}

}

// Step 6 --------------------------------------------------------------------
// Assert ratelimits worked
// ---------------------------------------------------------------------------

exactLimit := int(inferenceLimit.Limit) * int(tc.testDuration/(time.Duration(inferenceLimit.Duration)*time.Millisecond))
upperLimit := int(1.2 * float64(exactLimit))
lowerLimit := exactLimit
if total < lowerLimit {
lowerLimit = total
}
t.Logf("Total: %d, Passed: %d, lowerLimit: %d, exactLimit: %d, upperLimit: %d", total, passed, lowerLimit, exactLimit, upperLimit)

// check requests::api is not exceeded
require.GreaterOrEqual(t, passed, lowerLimit)
require.LessOrEqual(t, passed, upperLimit)
})
}
})
}
}
12 changes: 4 additions & 8 deletions apps/agent/integration/identities/token_ratelimits_test.go
@@ -28,14 +28,10 @@ func TestClusterRatelimitAccuracy(t *testing.T) {
baseURL := os.Getenv("UNKEY_BASE_URL")
require.NotEmpty(t, baseURL, "UNKEY_BASE_URL must be set")

-options := []unkey.SDKOption{
+sdk := unkey.New(
+unkey.WithServerURL(baseURL),
unkey.WithSecurity(rootKey),
-}
-
-if baseURL != "" {
-options = append(options, unkey.WithServerURL(baseURL))
-}
-sdk := unkey.New(options...)
+)

api, err := sdk.Apis.CreateAPI(ctx, operations.CreateAPIRequestBody{
Name: uid.New("testapi"),
@@ -47,7 +43,7 @@ func TestClusterRatelimitAccuracy(t *testing.T) {
_, err = sdk.Identities.CreateIdentity(ctx, operations.CreateIdentityRequestBody{
ExternalID: externalId,
Meta: map[string]any{
"email": "[email protected]",
"email": "[email protected]",
},
})
require.NoError(t, err)
125 changes: 125 additions & 0 deletions apps/agent/integration/keys/ratelimits_test.go
@@ -0,0 +1,125 @@
package keys_test

import (
"context"
"fmt"
"os"
"testing"
"time"

"github.com/stretchr/testify/require"
unkey "github.com/unkeyed/unkey-go"
"github.com/unkeyed/unkey-go/models/components"
"github.com/unkeyed/unkey-go/models/operations"
attack "github.com/unkeyed/unkey/apps/agent/pkg/testutil"
"github.com/unkeyed/unkey/apps/agent/pkg/uid"
"github.com/unkeyed/unkey/apps/agent/pkg/util"
)

func TestDefaultRatelimitAccuracy(t *testing.T) {
// Step 1 --------------------------------------------------------------------
// Setup the sdk, create an API and a key
// ---------------------------------------------------------------------------

ctx := context.Background()
rootKey := os.Getenv("INTEGRATION_TEST_ROOT_KEY")
require.NotEmpty(t, rootKey, "INTEGRATION_TEST_ROOT_KEY must be set")
baseURL := os.Getenv("UNKEY_BASE_URL")
require.NotEmpty(t, baseURL, "UNKEY_BASE_URL must be set")

options := []unkey.SDKOption{
unkey.WithSecurity(rootKey),
}

if baseURL != "" {
options = append(options, unkey.WithServerURL(baseURL))
}
sdk := unkey.New(options...)

for _, tc := range []struct {
rate attack.Rate
testDuration time.Duration
}{
{
rate: attack.Rate{Freq: 20, Per: time.Second},
testDuration: 1 * time.Minute,
},
{
rate: attack.Rate{Freq: 100, Per: time.Second},
testDuration: 5 * time.Minute,
},
} {
t.Run(fmt.Sprintf("[%s] over %s", tc.rate.String(), tc.testDuration), func(t *testing.T) {
api, err := sdk.Apis.CreateAPI(ctx, operations.CreateAPIRequestBody{
Name: uid.New("testapi"),
})
require.NoError(t, err)

// Step 2 --------------------------------------------------------------------
// Update the identity with ratelimits
// ---------------------------------------------------------------------------

// Step 3 --------------------------------------------------------------------
// Create keys that share the same identity and therefore the same ratelimits
// ---------------------------------------------------------------------------

ratelimit := operations.Ratelimit{
Limit: 100,
Duration: util.Pointer(time.Minute.Milliseconds()),
}

Hardcoded rate limit values reduce test flexibility

The rate limit applied to the key is hardcoded and does not vary with test cases.

Currently, the ratelimit is set to:

ratelimit := operations.Ratelimit{
	Limit:    100,
	Duration: util.Pointer(time.Minute.Milliseconds()),
}

Consider parameterizing the ratelimit based on the test case to make the test more flexible and cover different rate limit scenarios:

- ratelimit := operations.Ratelimit{
- 	Limit:    100,
- 	Duration: util.Pointer(time.Minute.Milliseconds()),
- }
+ ratelimit := operations.Ratelimit{
+ 	Limit:    int64(tc.rate.Freq * int(tc.rate.Per.Seconds())),
+ 	Duration: util.Pointer(tc.rate.Per.Milliseconds()),
+ }

This change allows each test case to define its own rate limit, aligning with the attack.Rate used.


key, err := sdk.Keys.CreateKey(ctx, operations.CreateKeyRequestBody{
APIID: api.Object.APIID,
Ratelimit: &ratelimit,
})
require.NoError(t, err)

// Step 5 --------------------------------------------------------------------
// Test ratelimits
// ---------------------------------------------------------------------------

total := 0
passed := 0

results := attack.Attack(t, tc.rate, tc.testDuration, func() bool {

res, err := sdk.Keys.VerifyKey(context.Background(), components.V1KeysVerifyKeyRequest{
APIID: unkey.String(api.Object.APIID),
Key: key.Object.Key,
Ratelimits: []components.Ratelimits{
{Name: "default"},
},
})
require.NoError(t, err)

Potential misuse of require inside concurrent function

Using require.NoError(t, err) inside the function passed to attack.Attack may lead to unexpected behavior in concurrent tests.

The require package's functions like require.NoError call t.FailNow() on failure, which might not work as expected inside goroutines or concurrent functions. This can cause the test to pass even when it should fail.

Consider capturing the error and returning it to the main test function for proper assertion.

-res, err := sdk.Keys.VerifyKey(context.Background(), components.V1KeysVerifyKeyRequest{
+res, err := sdk.Keys.VerifyKey(ctx, components.V1KeysVerifyKeyRequest{
	APIID: unkey.String(api.Object.APIID),
	Key:   key.Object.Key,
	Ratelimits: []components.Ratelimits{
		{Name: "default"},
	},
})
-require.NoError(t, err)
+if err != nil {
+    t.Errorf("VerifyKey failed: %v", err)
+    return false
+}

Alternatively, ensure that all assertions within the function are safe to use in concurrent contexts or adjust attack.Attack to handle errors appropriately.
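As a rough sketch of that alternative (hypothetical names, reusing the surrounding test's variables, not part of this PR), the attacked function could report the error in its return value so that all asserting happens on the main test goroutine:

// verifyResult is a hypothetical carrier: the worker only reports, the
// test goroutine asserts. attack.Attack is generic over the response type.
type verifyResult struct {
	valid bool
	err   error
}

results := attack.Attack(t, tc.rate, tc.testDuration, func() verifyResult {
	res, err := sdk.Keys.VerifyKey(ctx, components.V1KeysVerifyKeyRequest{
		APIID:      unkey.String(api.Object.APIID),
		Key:        key.Object.Key,
		Ratelimits: []components.Ratelimits{{Name: "default"}},
	})
	if err != nil {
		return verifyResult{err: err}
	}
	return verifyResult{valid: res.V1KeysVerifyKeyResponse.Valid}
})

for r := range results {
	require.NoError(t, r.err) // safe here: this loop runs on the test goroutine
	total++
	if r.valid {
		passed++
	}
}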

return res.V1KeysVerifyKeyResponse.Valid

})

for valid := range results {
total++
if valid {
passed++
}

}

// Step 6 --------------------------------------------------------------------
// Assert ratelimits worked
// ---------------------------------------------------------------------------

exactLimit := int(ratelimit.Limit) * int(tc.testDuration/(time.Duration(*ratelimit.Duration)*time.Millisecond))
upperLimit := int(1.2 * float64(exactLimit))
lowerLimit := exactLimit
if total < lowerLimit {
lowerLimit = total
}

Logical error in lowerLimit adjustment

Adjusting lowerLimit to total when total < lowerLimit may lead to incorrect assertions, potentially masking test failures.

At lines 114-116:

if total < lowerLimit {
	lowerLimit = total
}

This adjustment allows passed to always be greater than or equal to lowerLimit, which can render the assertion at line 120 ineffective. Instead, consider reviewing the logic to ensure that the lowerLimit accurately reflects the expected minimum number of successful requests.

For example, you might want to remove the adjustment or revise it based on the test's intent.
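One possible revision (a sketch only, assuming a symmetric 20% tolerance mirroring the existing upperLimit) derives the lower bound from exactLimit instead of clamping it to total:

exactLimit := int(ratelimit.Limit) * int(tc.testDuration/(time.Duration(*ratelimit.Duration)*time.Millisecond))
upperLimit := int(1.2 * float64(exactLimit)) // allow 20% overshoot
lowerLimit := int(0.8 * float64(exactLimit)) // assumed tolerance; fails loudly if too few requests pass
require.GreaterOrEqual(t, passed, lowerLimit)
require.LessOrEqual(t, passed, upperLimit)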

t.Logf("Total: %d, Passed: %d, lowerLimit: %d, exactLimit: %d, upperLimit: %d", total, passed, lowerLimit, exactLimit, upperLimit)

// check requests::api is not exceeded
require.GreaterOrEqual(t, passed, lowerLimit)
require.LessOrEqual(t, passed, upperLimit)
})

}
}
2 changes: 1 addition & 1 deletion apps/agent/pkg/circuitbreaker/lib.go
@@ -188,7 +188,7 @@ func (cb *CB[Res]) preflight(ctx context.Context) error {
return ErrTripped
}

-cb.logger.Info().Str("state", string(cb.state)).Int("requests", cb.requests).Int("maxRequests", cb.config.maxRequests).Msg("circuit breaker state")
+cb.logger.Debug().Str("state", string(cb.state)).Int("requests", cb.requests).Int("maxRequests", cb.config.maxRequests).Msg("circuit breaker state")
if cb.state == HalfOpen && cb.requests >= cb.config.maxRequests {
return ErrTooManyRequests
}
1 change: 0 additions & 1 deletion apps/agent/pkg/clickhouse/flush.go
@@ -15,7 +15,6 @@ func flush[T any](ctx context.Context, conn ch.Conn, table string, rows []T) err
return fault.Wrap(err, fmsg.With("preparing batch failed"))
}
for _, row := range rows {
-fmt.Printf("row: %+v\n", row)
err = batch.AppendStruct(&row)
if err != nil {
return fault.Wrap(err, fmsg.With("appending struct to batch failed"))
69 changes: 69 additions & 0 deletions apps/agent/pkg/testutil/attack.go
@@ -0,0 +1,69 @@
package attack

import (
"fmt"
"sync"
"testing"
"time"
)

type Rate struct {
Freq int
Per time.Duration
}

func (r Rate) String() string {
return fmt.Sprintf("%d per %s", r.Freq, r.Per)
}

// Attack executes the given function at the given rate for the given duration
// and returns a channel on which the results are sent.
//
// The caller must process the results as they arrive on the channel to avoid
// blocking the worker goroutines.
func Attack[Response any](t *testing.T, rate Rate, duration time.Duration, fn func() Response) <-chan Response {
t.Log("attacking")
wg := sync.WaitGroup{}
workers := 256

Consider making workers configurable

Currently, the number of worker goroutines is hardcoded to 256:

workers := 256

To make the function more flexible and adaptable to different testing environments, consider making workers a parameter or deriving it from the available CPU cores:

workers := runtime.NumCPU()

Don't forget to import the runtime package:

import (
    "fmt"
    "runtime"
    "sync"
    "testing"
    "time"
)


ticks := make(chan struct{})
responses := make(chan Response)

totalRequests := rate.Freq * int(duration/rate.Per)
dt := rate.Per / time.Duration(rate.Freq)

wg.Add(totalRequests)

go func() {
for i := 0; i < totalRequests; i++ {
ticks <- struct{}{}
time.Sleep(dt)
}
}()

for i := 0; i < workers; i++ {
go func() {
for range ticks {
responses <- fn()
wg.Done()

}
}()
}

go func() {
wg.Wait()
t.Log("attack done, waiting for responses to be processed")

close(ticks)
pending := len(responses)
for pending > 0 {
t.Logf("waiting for responses to be processed: %d", pending)
time.Sleep(100 * time.Millisecond)
}
close(responses)

}()

return responses
}
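As a usage reference (hypothetical snippet, not part of this diff), a caller must drain the returned channel as results arrive, mirroring the integration tests above; otherwise the worker goroutines block on the send:

// Hypothetical caller: count successes at 10 req/s for 5 seconds.
rate := attack.Rate{Freq: 10, Per: time.Second}
results := attack.Attack(t, rate, 5*time.Second, func() bool {
	return doOneRequest() // stand-in for the call under test
})

passed := 0
for ok := range results {
	if ok {
		passed++
	}
}
t.Logf("passed: %d", passed)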