diff --git a/docs/docs/cli-commands.md b/docs/docs/cli-commands.md index 20df2be3a0c6..b7157436ffba 100644 --- a/docs/docs/cli-commands.md +++ b/docs/docs/cli-commands.md @@ -1685,6 +1685,7 @@ yarn redwood setup | Commands | Description | | ------------------ | ------------------------------------------------------------------------------------------ | | `auth` | Set up auth configuration for a provider | +| `cache` | Set up cache configuration for memcached or redis | | `custom-web-index` | Set up an `index.js` file, so you can customize how Redwood web is mounted in your browser | | `deploy` | Set up a deployment configuration for a provider | | `generator` | Copy default Redwood generator templates locally for customization | @@ -1728,6 +1729,20 @@ yarn redwood setup graphiql | `--expiry, -e` | Token expiry in minutes. Default is 60 | | `--view, -v` | Print out generated headers to console | + +### setup cache + +This command creates a setup file in `api/src/lib/cache.{ts|js}` for connecting to a Memcached or Redis server and allows caching in services. See the [**Caching** section of the Services docs](/docs/services#caching) for usage. + +``` +yarn redwood setup cache +``` + +| Arguments & Options | Description | +| :------------------ | :----------------------- | +| `client` | Name of the client to configure, `memcached` or `redis` | +| `--force, -f` | Overwrite existing files | + ### setup custom-web-index Redwood automatically mounts your `` to the DOM, but if you want to customize how that happens, you can use this setup command to generate an `index.js` file in `web/src`. diff --git a/docs/docs/services.md b/docs/docs/services.md index 4967d7ad3ab5..473bf8f5eed5 100644 --- a/docs/docs/services.md +++ b/docs/docs/services.md @@ -725,3 +725,306 @@ const createPost = (input) => { ``` This makes sure that the user that's logged in and creating the post cannot reuse the same blog post title as one of their own posts. 
+ +## Caching + +Redwood provides a simple [LRU cache](https://www.baeldung.com/java-lru-cache) for your services. With an LRU cache you never need to worry about manually expiring or updating cache items. You either read an existing item (if its **key** is found) or create a new cached item if it isn't. This means that over time the cache will get bigger and bigger until it hits a memory or disk usage limit, but you don't care: the cache software is responsible for removing the oldest/least used members to make more room. For many applications, the entire database result set may fit in the cache! + +How does a cache work? At its simplest, a cache is just a big chunk of memory or disk that stores key/value pairs. A unique key is used to look up a value—the value being what you wanted to cache. The trick with a cache is selecting a key that makes the data unique among all the other data being cached, but that also changes when the computed value changes, so that you can safely discard the old entry and save a new value instead. More on that in [Choosing a Good Key](#choosing-a-good-key) below. + +Why use a cache? If you have an expensive or time-consuming process in your service that doesn't change on every request, it's a great candidate for caching. For example, for a storefront, you may want to show the most popular products. This may be computed by a combination of purchases, views, time spent on the product page, social media posts, or a whole host of additional information. Aggregating this data may take seconds or more, but the list of popular products probably doesn't change that often. There's no reason to make every user wait all that time just to see the same list of products.
With service caching, just wrap this computation in the `cache()` function and give it an expiration time of 24 hours, and now the result is returned in milliseconds for every user (except for the first one in a 24-hour period, for whom it has to be computed from scratch and then stored in the cache again). You can even remove this first user's wait by "warming" the cache: triggering the service function on a 24-hour schedule by a process you run on the server, rather than by a user's first visit, so that it's the one that ends up waiting for the results to be computed. + +:::info What about GraphQL caching? + +You could also cache data at the [GraphQL layer](https://community.redwoodjs.com/t/guide-power-of-graphql-caching/2624), which has some of the same benefits. Using Envelop plugins you can add a response cache _after_ your services (resolver functions in the context of GraphQL) run - with a global configuration. + +However, by placing the cache one level "lower," at the service level, you get the benefit of caching even when one service calls another internally, or when a service is called via another serverless function, and finer-grained control over what you're caching. + +In our example above you could cache the GraphQL query for the most popular products. But if you had an internal admin function which used a different query, augmenting the popular products with additional information, you would need to cache that query as well. With service caching, that admin service function can call the same popular products function that's already cached and get the speed benefit automatically. + +::: + +### Clients + +As of this writing, Redwood ships with clients for the two most popular cache backends: [Memcached](https://memcached.org/) and [Redis](https://redis.io/). Service caching wraps each of these in an adapter, which makes it easy to add more clients in the future.
If you're interested in adding an adapter for your favorite cache client, [open an issue](https://github.com/redwoodjs/redwood/issues) and tell us about it! Instructions for getting started with the code are [below](#creating-your-own-client). + +### What Can Be Cached + +The service cache mechanism can only store strings, so whatever data you want to cache needs to be able to survive a round trip through `JSON.stringify()` and `JSON.parse()`. That means that if you have a real `Date` instance, you'd need to re-initialize it as a `Date`, because it's going to return from the cache as a string like `"2022-08-24T17:50:05.679Z"`. + +A function will not survive being serialized as a string, so those are right out. + +Most Prisma datasets can be serialized just fine, as long as you're mindful of dates and things like BLOBs, which may contain binary data and could get mangled. + +We have an [outstanding issue](https://github.com/redwoodjs/redwood/issues/6282) which will add support for caching instances of custom classes and getting them back out of the cache as that instance, rather than the generic object you would normally get after a `JSON.stringify`! + +### Expiration + +You can set a number of seconds after which to automatically expire the key. After this time the call to `cache()` will set the key/value in the store again. See the function descriptions below for usage examples. + +### Choosing a Good Key + +As the old saying goes, "there are only two hard problems in computer science: cache invalidation, and naming things." Cache invalidation makes this list, funnily enough, largely because of naming something—the key for the cache. + +Consider a product that you want to cache. At first thought you may think "I'll use the name of the product as its key" and so your key is `led light strip`. One problem is that you must make absolutely sure that your product name is unique across your shop.
This may not be a viable solution for your store: you could have two manufacturers with the same product name. + +Okay, let's use the product's database ID as the key: `41442`. It's definitely going to be unique now, but what if you later add a cache for users? Could a user record in the database have that same ID? Probably, so now you may think you're retrieving a cached user, but you'll get the product instead. + +What if we add a "type" into the cache key, so we know what type of thing we're caching: `product-41442`. Now we're getting somewhere. Users will have a cache key `user-41442` and the two won't clash. But what happens if you change some data about that product, like the description? Remember that we can only get an existing key/value, or create a key/value in the cache, we can't update an existing key. How can we encapsulate the "knowledge" that a product's data has changed into the cache key? + +One solution would be to put all of the data that we care about changing into the key, like: `product-41442-${description}`. The problem here is that keys can only be so long (in Memcached it's 250 bytes). Another option could be to hash the entire product object and use that as the key (this can encompass the `product` part of the key as well as the ID itself, since *any* data in the object being different will result in a new hash): + +```js +import md5 from "blueimp-md5" + +cache(md5(JSON.stringify(product)), () => { + // ... +}) +``` + +This works, but it's not the nicest thing to look at in the code, and computing a hash isn't free (it's fast, but not 0 seconds). + +For this reason we always recommend that you add an `updatedAt` column to all of your models. This will automatically be set by Prisma to a timestamp whenever it updates that row in the database. This means we can count on this value being different whenever a record changes, regardless of which column actually changed. Now our key can look like `product-${id}-${updatedAt.getTime()}`.
We use `getTime()` so that the timestamp is returned as a nice integer `1661464626032` rather than some string like `Thu Aug 25 2022 14:56:25 GMT-0700 (Pacific Daylight Time)`. + +:::info + +If you're using [Redwood Record](/docs/redwoodrecord) pretty soon you'll be able to cache a record by just passing the instance as the key, and it will automatically create the same key behind the scenes for you: + +```js +cache(product, () => { + // ... +}) +``` +::: + +One drawback to this key is in potentially responding to *too many* data changes, even ones we don't care about caching. Imagine that a product has a `views` field that tracks how many times it has been viewed in the browser. This number will be changing all the time, but if we don't display that count to the user then we're constantly re-creating the cache for the product even though no data the user will see is changing. There's no way to tell Prisma "set the `updatedAt` when the record changes, but not if the `views` column changes." This cache key is too variable. One solution would be to move the `views` column to another table with a `productId` pointing back to this record. Now the `product` is back to just containing data we care about caching. + +What if you want to expire a cache regardless of whether the data itself has changed? Maybe you make a UI change where you now show a product's SKU on the page where you didn't before. You weren't previously selecting the `sku` field out of the database, and so it hasn't been cached. But now that you're showing it you'll need to add it to the list of fields to return from the service. One solution would be to forcibly update all of the `updatedAt` fields in the database. But a) Prisma won't easily let you do this since it thinks it controls that column, and b) every product is going to appear to have been edited at the same time, when in fact nothing changed—you just needed to bust the cache.
+ +An easier solution to this problem would be to add some kind of version number to your cache key that you are in control of and can change whenever you like. Something like prepending a `v1` to the key: `v1-product-${id}-${updatedAt}`. + +And this is our key's final form: a unique, but flexible, key that allows us to expire the cache on demand (change the version) or automatically expire it when the record itself changes. + +#### Expiration-based Keys + +You can skirt these issues about what data is changing and what to include or not include in the key by just setting an expiration time on the cache entry. You may decide that if a change is made to a product, it's okay if users don't see the change for, say, an hour. In this case just set the expiration time to 3600 seconds and it will automatically be re-built, whether something changed in the record or not: + +```js +cache(`product-${id}`, () => { + // ... +}, { expires: 3600 }) +``` + +This leads to your product cache being rebuilt every hour, even though you haven't made any changes that are of consequence to the user. But that may well be worth the tradeoff versus rebuilding the cache when *no* useful data has changed (like the `views` column being updated). + +#### Global Cache Key Prefix + +Just like the `v1` we added to the `product` cache key above, you can globally prefix a string to *all* of your cache keys: + +```js title=api/src/lib/cache.js +export const { cache, cacheFindMany } = createCache(client, { + logger, + timeout: 500, + // highlight-next-line + prefix: 'alpha', +}) +``` + +This would turn a key like `posts-123` into `alpha-posts-123` before giving it to the cache client. If you prefixed with `v1` in the individual cache key, you'd now have `alpha-v1-posts-123`. + +This gives you a nuclear option to invalidate all cache keys globally in your app. Let's say you launched a new redesign, or made some other visual change to your site where you may be showing more or less data from your GraphQL queries.
If your data was purely based on the DB data (like `id` and `updatedAt`), there would be no way to refresh all of these keys without changing each and every cache key manually in every service, or by manually updating *all* `updatedAt` timestamps in the database. This gives you a fallback for refreshing all data at once. + +#### Caching User-specific Data + +Sometimes you want to cache data unique to a user. Imagine a Recommended Products feature on our store: it should recommend products based on the user's previous purchase history, views, etc. In this case we'd want to include something unique about the user itself in the key: + +```js +cache(`recommended-${context.currentUser.id}`, () => { + // ... +}) +``` + +If every page the user visits has a different list of recommended products then creating this cache may not be worth it: how often does the user revisit the same product page more than once? Conversely, if you show the *same* recommended products on every page then this cache would definitely improve the user's experience. + +The *key* to writing a good key (!) is to think carefully about the circumstances in which the key needs to expire, and include those bits of information in the key string/array. Adding caching can lead to weird bugs you don't expect, but in these cases the root cause will usually be the cache key not containing enough bits of information to expire it correctly. When in doubt, restart the app with the cache server (memcached or redis) disabled and see if the same behavior is still present. If not, the cache key is the culprit!
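Putting the pieces of this section together, here's a minimal sketch of composing a key from a version, a record "type", the record's `id` and `updatedAt`, and (optionally) the current user. Note that `buildKey` is a hypothetical helper for illustration, not part of Redwood's API:

```js
// Hypothetical helper (not part of Redwood's API): composes a cache key from
// a version you control, a record "type", and the bits that should expire it.
const buildKey = (version, type, record, userId = null) => {
  const parts = [version, type, record.id, record.updatedAt.getTime()]
  // For user-specific caches, mix in the current user's id as well
  if (userId !== null) {
    parts.push(`user-${userId}`)
  }
  return parts.join('-')
}

// A product updated at a fixed timestamp
const product = { id: 41442, updatedAt: new Date(1661464626032) }

console.log(buildKey('v1', 'product', product))
// v1-product-41442-1661464626032

console.log(buildKey('v1', 'recommended', product, 123))
// v1-recommended-41442-1661464626032-user-123
```

Bumping the `version` argument expires everything built with the old version, while `updatedAt` expires a single record's entry automatically when it changes.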
+ +### Setup + +We have a setup command which creates a file `api/src/lib/cache.js` and includes basic initialization for Memcached or Redis: + +```bash +yarn rw setup cache memcached +yarn rw setup cache redis +``` + +This generates the following (memcached example shown): + +```js title=api/src/lib/cache.js +import { createCache, MemcachedClient } from '@redwoodjs/api/cache' + +import { logger } from './logger' + +const memJsFormattedLogger = { + log: (msg) => logger.error(msg), +} + +let client +try { + client = new MemcachedClient('localhost:11211', { + logger: memJsFormattedLogger, + }) +} catch (e) { + console.error(`Could not connect to cache: ${e.message}`) +} + +export const { cache, cacheFindMany } = createCache(client, { + logger, + timeout: 500, +}) +``` + +When the time comes, you can replace the hardcoded `localhost:11211` with an ENV var that can be set per-environment. + +#### Logging + +You'll see two different instances of passing `logger` as arguments here. The first: + +```js +client = new MemcachedClient(process.env.CACHE_SERVER, { + logger: memJsFormattedLogger, +}) +``` + +passes it to the `MemcachedClient` initializer, which passes it on to the MemJS library underneath so that it (MemJS) can report errors. `memJsFormattedLogger` just wraps the Redwood logger call in another function, which is the format expected by the MemJS library. + +The second usage of the logger argument: + +```js +export const { cache, cacheFindMany } = createCache(client, { + logger, + timeout: 500 +}) +``` + +is passing it to Redwood's own service cache code, so that it can log cache hits, misses, or errors. + +#### Options + +There are several options you can pass to the `createCache()` call: + +* `logger`: an instance of the Redwood logger. Defaults to `null`, but if you want any feedback about what the cache is doing, make sure to set this!
+* `timeout`: how long to wait for the cache server to respond during a get/set before giving up and just executing the function containing what you want to cache and returning the result directly. Defaults to `500` milliseconds. +* `prefix`: a global cache key prefix. Defaults to `null`. +* `fields`: an object that maps the model field names for the `id` and `updatedAt` fields if your database has another name for them. For example: `fields: { id: 'post_id', updatedAt: 'updated_at' }`. Even if only one of your names is different, you need to provide both properties to this option. Defaults to `{ id: 'id', updatedAt: 'updatedAt' }`. + +### `cache()` + +Use this function when you want to cache some data, optionally including a number of seconds before it expires: + +```js +// cache forever +const post = ({ id }) => { + return cache(`posts`, () => { + return db.post.findMany() + }) +} + +// cache for 1 hour +const post = ({ id }) => { + return cache(`posts`, () => { + return db.post.findMany() + }, { expires: 3600 }) +} +``` + +Note that a key can be a string or an array: + +```js +const post = ({ id }) => { + return cache(`posts-${id}-${updatedAt.getTime()}`, () => { + return db.post.findMany() + }) +} + +// or + +const post = ({ id }) => { + return cache(['posts', id, updatedAt.getTime()], () => { + return db.post.findMany() + }) +} +``` + +:::info + +`cache()` returns a Promise so you'll want to `await` it if you need the data for further processing in your service. If you're only using your service as a GraphQL resolver then you can just return `cache()` directly. + +::: + +### `cacheFindMany()` + +Use this function if you want to cache the results of a `findMany()` call from Prisma, but only until one or more of the records in the set is updated.
This is sort of a best-of-both-worlds cache scenario where you can cache as much data as possible, but also expire and re-cache as soon as any piece of it changes, without going through every record manually to see if it's changed: whenever *any* record changes the cache will be discarded. + +This function will always execute a `findFirst()` query to get the latest record that's changed, then use its `id` and `updatedAt` timestamp as the cache key for the full query. This means you'll always incur the overhead of a single DB call, but not the bigger `findMany()` unless something has changed. Note you still need to include a cache key prefix: + +```js +const post = ({ id }) => { + return cacheFindMany(`users`, db.user) +} +``` + +The above is the simplest usage example. If you need to pass a `where`, or any other object that `findMany()` accepts, include a `conditions` key in an object as the third argument: + +```js +const post = ({ id }) => { + return cacheFindMany(`users`, db.user, { + conditions: { where: { roles: 'admin' } } + }) +} +``` + +This is functionally equivalent to the following: + +```js +const latest = await db.user.findFirst({ + where: { roles: 'admin' }, + orderBy: { updatedAt: 'desc' }, + select: { id: true, updatedAt: true }, +}) + +return cache(`users-${latest.id}-${latest.updatedAt.getTime()}`, () => { + return db.user.findMany({ where: { roles: 'admin' } }) +}) +``` + +If you also want to pass an `expires` option, do it in the same object as `conditions`: + +```js +const post = ({ id }) => { + return cacheFindMany( + `users`, db.user, { + conditions: { where: { roles: 'admin' } }, + expires: 86400 + } + ) +} +``` + +:::info + +`cacheFindMany()` returns a Promise so you'll want to `await` it if you need the data for further processing in your service. If you're only using your service as a GraphQL resolver then you can just return the Promise.
+ +::: + + +### Testing what you cache +We wouldn't just give you all of these caching APIs and not show you how to test it, right? You'll find all the details in the [Caching section in the testing doc](testing.md#testing-caching). + +### Creating Your Own Client + +If Memcached or Redis don't serve your needs, you can create your own client adapter. In the Redwood codebase take a look at `packages/api/src/cache/clients` as a reference for writing your own. The interface is extremely simple: + +* Extend from the `BaseClient` class. +* A constructor that takes whatever arguments you want, passing them through to the client's initialization code. +* A `get()` function that accepts a `key` argument and returns the data from the cache if found, otherwise `null`. Note that in the Memcached and Redis clients the value returned is first run through `JSON.parse()` but if your cache client supports native JS objects then you wouldn't need to do this. +* A `set()` function that accepts a string `key`, the `value` to be cached, and an optional `options` object containing at least an `expires` key. Note that `value` can be a real JavaScript object at this point, but in Memcached and Redis the value is run through `JSON.stringify()` before being sent to the client library. You may or may not need to do the same thing, depending on what your cache client supports. diff --git a/docs/docs/testing.md b/docs/docs/testing.md index c9aa7a5e4573..4ef02ebe3c82 100644 --- a/docs/docs/testing.md +++ b/docs/docs/testing.md @@ -1744,6 +1744,222 @@ Luckily, RedwoodJS has several api testing utilities to make [testing functions ## Testing GraphQL Directives Please refer to the [Directives documentation](./directives.md) for details on how to write Redwood [Validator](./directives.md#writing-validator-tests) or [Transformer](./directives.md#writing-transformer-tests) Directives tests.
+ + +## Testing Caching +If you're using Redwood's [caching](services#caching), we provide a handful of utilities and patterns to help you test this too! + +Let's say you have a service where you cache the list of products, as well as individual products: + +```ts +export const listProducts: QueryResolvers['listProducts'] = () => { + // highlight-next-line + return cacheFindMany('products-list', db.product, { + expires: 3600, + }) +} + +export const product: QueryResolvers['product'] = async ({ id }) => { + // highlight-next-line + return cache( + `cached-product-${id}`, + () => + db.product.findUnique({ + where: { id }, + }), + { expires: 3600 } + ) +} +``` + +With this code, we'll be caching an array of products (from the find many), and individual products that get queried too. + + +:::tip +It's important to note that when you write scenario or unit tests, Redwood will use the `InMemoryClient`. + +The `InMemoryClient` has a few extra features to help with testing: + +1. Allows you to call `cacheClient.clear()` so each of your tests has a fresh cache state +2. Allows you to get all its contents (without cache keys) with the `cacheClient.contents` getter +::: + + +There are a few different things you may want to test, but let's start with the basics. + +In your test, let's import your cache client and clear it after each test: + + +```ts +import type { InMemoryClient } from '@redwoodjs/api/cache' +import { client } from 'src/lib/cache' + +// For TypeScript users +const testCacheClient = client as InMemoryClient + +describe('products', () => { + // highlight-start + afterEach(() => { + testCacheClient.clear() + }) + // highlight-end + //.... +}) +``` + +### The `toHaveCached` matcher +We have a custom Jest matcher included in Redwood to make things a little easier.
To use it, simply add an import to the top of your test file: + +```ts +// highlight-next-line +import '@redwoodjs/testing/cache' +// ^^ make `.toHaveCached` available +``` + +The `toHaveCached` matcher can take three forms: + +`expect(testCacheClient)` +1. `.toHaveCached(expectedData)` - check for an exact match of the data, regardless of the key +2. `.toHaveCached('expected-key', expectedData)` - check that the data is cached in the key you supply +3. `.toHaveCached(/key-regex.*/, expectedData)` - check that data is cached in a key that matches the regex supplied + + +Let's see these in action now: + +```ts +scenario('returns a single product', async (scenario: StandardScenario) => { + await product({ id: scenario.product.three.id }) + + // Pattern 1: Only check that the data is present in the cache + expect(testCacheClient).toHaveCached(scenario.product.three) + + // Pattern 2: Check that data is cached, at a specific key + expect(testCacheClient).toHaveCached( + `cached-product-${scenario.product.three.id}`, + scenario.product.three + ) + + // Pattern 3: Check that data is cached, in a key matching the regex + expect(testCacheClient).toHaveCached( + /cached-.*/, + scenario.product.three + ) +}) +``` + + +:::info Serialized Objects in Cache +Remember that the cache only ever contains serialized objects. So if you passed an object like this: +```js +{ + id: 5, + published: new Date('12/10/1995') +} +``` + +The `published` key will be serialized and stored as a string. To make testing easier for you, we serialize the object you are passing when you use the `toHaveCached` matcher, before we compare it against the value in the cache. +::: + +### Partial Matching +It can be a little tedious to check that every key in the object you are looking for matches. This is especially true if you have autogenerated values such as `updatedAt` and `cuid` IDs. + +To help with this, we've provided a helper for partial matching!
+ +```ts +// highlight-next-line +import { partialMatch } from '@redwoodjs/testing/cache' + +scenario('returns all products', async (scenario: StandardScenario) => { + await products() + + // Partial match using `toHaveCached`, if you supply a key + expect(testCacheClient).toHaveCached( + /cached-products.*/, + // highlight-next-line + partialMatch([{ name: 'LS50', brand: 'KEF' }]) + ) + + // Or you can use the .contents getter + expect(testCacheClient.contents).toContainEqual( + // check that an array contains an object matching + // highlight-next-line + partialMatch([{ name: 'LS50', brand: 'KEF' }]) + ) +}) + +scenario('finds a single product', async () => { + await product({ id: 5 }) + + // You can also check for a partial match of an object + expect(testCacheClient).toHaveCached( + /cached-.*/, + // highlight-start + partialMatch({ + name: 'LS50', + brand: 'KEF' + }) + ) + // highlight-end +}) +``` + +Partial match is just syntactic sugar—underneath it uses Jest's `expect.objectContaining` and `expect.arrayContaining`. + +The `partialMatch` helper takes two forms of arguments: + +- If you supply an object, you are expecting a partial match of that object +- If you supply an array of objects, you are expecting an array containing a partial match of each of the objects + + +:::tip +Note that you cannot use `partialMatch` with `toHaveCached` without supplying a key! + +```ts +// 🛑 Will never pass! +expect(testCacheClient).toHaveCached(partialMatch({name: 'LS50'})) +``` + +For partial matches, you either have to supply a key to `toHaveCached` or use the `cacheClient.contents` helper. +::: + + +### Strict Matching + +If you'd like stricter checking (i.e. you do not want helpers to automatically serialize/deserialize your _expected_ value), you can use the `.contents` getter on the test cache client. Note that the `.contents` helper will still de-serialize the values in your cache (to make it easier to compare), just not the expected value.
+ +For example: + +```ts +const expectedValue = { + // Note that this is a date 👇 + publishDate: new Date('12/10/1988'), + title: 'A book from the eighties', + id: 1988 +} + +// ✅ will pass, because we will serialize the publishDate for you +expect(testCacheClient).toHaveCached(expectedValue) + + +// 🛑 won't pass, because publishDate in cache is a string, but you supplied a Date object +expect(testCacheClient.contents).toContainEqual(expectedValue) + +// ✅ will pass, because you serialized the date +expect(testCacheClient.contents).toContainEqual({ + ...expectedValue, + publishDate: expectedValue.publishDate.toISOString() +}) + +// And if you wanted to view the raw contents of the cache +console.log(testCacheClient.storage) +``` + +This is mainly helpful when you are testing for a very specific value, or have edge cases in how the serialization/deserialization works in the cache. + + + + ## Wrapping Up So that's the world of testing according to Redwood. Did we miss anything? Can we make it even more awesome? Stop by [the community](https://community.redwoodjs.com) and ask questions, or if you've thought of a way to make this doc even better then [open a PR](https://github.com/redwoodjs/redwoodjs.com/pulls).
diff --git a/packages/api/cache/index.js b/packages/api/cache/index.js new file mode 100644 index 000000000000..201b6305f692 --- /dev/null +++ b/packages/api/cache/index.js @@ -0,0 +1,2 @@ +/* eslint-env es6, commonjs */ +module.exports = require('../dist/cache/index') diff --git a/packages/api/cache/package.json b/packages/api/cache/package.json new file mode 100644 index 000000000000..5e13515fce56 --- /dev/null +++ b/packages/api/cache/package.json @@ -0,0 +1,4 @@ +{ + "main": "./index.js", + "types": "../dist/cache/index.d.ts" +} diff --git a/packages/api/package.json b/packages/api/package.json index 828d98c2ccbc..916cbeefec40 100644 --- a/packages/api/package.json +++ b/packages/api/package.json @@ -17,6 +17,7 @@ }, "files": [ "dist", + "cache", "logger", "webhooks" ], @@ -55,11 +56,14 @@ "@types/crypto-js": "4.1.1", "@types/jsonwebtoken": "8.5.9", "@types/md5": "2.3.2", + "@types/memjs": "1", "@types/pascalcase": "1.0.1", "@types/split2": "3.2.1", "@types/uuid": "8.3.4", "aws-lambda": "1.0.7", "jest": "29.3.1", + "memjs": "1.3.0", + "redis": "4.2.0", "split2": "4.1.0", "typescript": "4.7.4" }, diff --git a/packages/api/src/cache/__tests__/cache.test.ts b/packages/api/src/cache/__tests__/cache.test.ts new file mode 100644 index 000000000000..c12ba794dfe9 --- /dev/null +++ b/packages/api/src/cache/__tests__/cache.test.ts @@ -0,0 +1,30 @@ +import InMemoryClient from '../clients/InMemoryClient' +import { createCache } from '../index' + +describe('cache', () => { + it('adds a missing key to the cache', async () => { + const client = new InMemoryClient() + const { cache } = createCache(client) + + const result = await cache('test', () => { + return { foo: 'bar' } + }) + + expect(result).toEqual({ foo: 'bar' }) + expect(client.storage.test.value).toEqual(JSON.stringify({ foo: 'bar' })) + }) + + it('finds an existing key in the cache', async () => { + const client = new InMemoryClient({ + test: { expires: 1977175194415, value: '{"foo":"bar"}' }, + }) + const { cache 
} = createCache(client) + + const result = await cache('test', () => { + return { bar: 'baz' } + }) + + // returns existing cached value, not the one that was just set + expect(result).toEqual({ foo: 'bar' }) + }) +}) diff --git a/packages/api/src/cache/__tests__/cacheFindMany.test.ts b/packages/api/src/cache/__tests__/cacheFindMany.test.ts new file mode 100644 index 000000000000..93e1ed74f528 --- /dev/null +++ b/packages/api/src/cache/__tests__/cacheFindMany.test.ts @@ -0,0 +1,98 @@ +import { PrismaClient } from '@prisma/client' + +import InMemoryClient from '../clients/InMemoryClient' +import { createCache } from '../index' + +const mockFindFirst = jest.fn() +const mockFindMany = jest.fn() + +jest.mock('@prisma/client', () => ({ + PrismaClient: jest.fn(() => ({ + user: { + findFirst: mockFindFirst, + findMany: mockFindMany, + }, + })), +})) + +describe('cacheFindMany', () => { + afterEach(() => { + jest.restoreAllMocks() + }) + + it('adds the collection to the cache based on latest updated user', async () => { + const now = new Date() + + const user = { + id: 1, + email: 'rob@redwoodjs.com', + updatedAt: now, + } + mockFindFirst.mockImplementation(() => user) + mockFindMany.mockImplementation(() => [user]) + + const client = new InMemoryClient() + const { cacheFindMany } = createCache(client) + const spy = jest.spyOn(client, 'set') + + await cacheFindMany('test', PrismaClient().user) + + expect(spy).toHaveBeenCalled() + expect(client.storage[`test-1-${now.getTime()}`].value).toEqual( + JSON.stringify([user]) + ) + }) + + it('adds a new collection if a record has been updated', async () => { + const now = new Date() + const user = { + id: 1, + email: 'rob@redwoodjs.com', + updatedAt: now, + } + const client = new InMemoryClient({ + [`test-1-${now.getTime()}`]: { + expires: 1977175194415, + value: JSON.stringify([user]), + }, + }) + + // set mock to return user that's been updated in the future, rather than + // the timestamp that's been cached already + const 
future = new Date() + future.setSeconds(future.getSeconds() + 1000) + user.updatedAt = future + mockFindFirst.mockImplementation(() => user) + mockFindMany.mockImplementation(() => [user]) + + const { cacheFindMany } = createCache(client) + const spy = jest.spyOn(client, 'set') + + await cacheFindMany('test', PrismaClient().user) + + expect(spy).toHaveBeenCalled() + // the `now` cache still exists + expect( + JSON.parse(client.storage[`test-1-${now.getTime()}`].value)[0].id + ).toEqual(1) + // the `future` cache should have been created + expect(client.storage[`test-1-${future.getTime()}`].value).toEqual( + JSON.stringify([user]) + ) + }) + + it('skips caching and just runs the findMany() if there are no records', async () => { + const client = new InMemoryClient() + mockFindFirst.mockImplementation(() => null) + mockFindMany.mockImplementation(() => []) + const { cacheFindMany } = createCache(client) + const getSpy = jest.spyOn(client, 'get') + const setSpy = jest.spyOn(client, 'set') + + const result = await cacheFindMany('test', PrismaClient().user) + + expect(result).toEqual([]) + expect(getSpy).not.toHaveBeenCalled() + expect(setSpy).not.toHaveBeenCalled() + }) +}) diff --git a/packages/api/src/cache/__tests__/disconnect.test.ts b/packages/api/src/cache/__tests__/disconnect.test.ts new file mode 100644 index 000000000000..c7998ffc1ed0 --- /dev/null +++ b/packages/api/src/cache/__tests__/disconnect.test.ts @@ -0,0 +1,26 @@ +import InMemoryClient from '../clients/InMemoryClient' +import { CacheTimeoutError } from '../errors' +import { createCache } from '../index' + +describe('client.disconnect', () => { + beforeEach(() => { + jest.clearAllMocks() + }) + + it('attempts to disconnect on timeout error', async () => { + const client = new InMemoryClient() + const { cache } = createCache(client) + const getSpy = jest.spyOn(client, 'get') + getSpy.mockImplementation(() => { + throw new CacheTimeoutError() + }) + const disconnectSpy = jest.spyOn(client, 'disconnect') 
+ + await cache('test', () => { + return { bar: 'baz' } + }) + + // the cache error should be swallowed and a disconnect attempted + expect(disconnectSpy).toHaveBeenCalled() + }) +}) diff --git a/packages/api/src/cache/__tests__/shared.test.ts b/packages/api/src/cache/__tests__/shared.test.ts new file mode 100644 index 000000000000..2e27679c6b0c --- /dev/null +++ b/packages/api/src/cache/__tests__/shared.test.ts @@ -0,0 +1,26 @@ +import { formatCacheKey } from '../index' + +describe('formatCacheKey', () => { + it('creates a key from a string', () => { + expect(formatCacheKey('foobar')).toEqual('foobar') + expect(formatCacheKey('foo-bar')).toEqual('foo-bar') + }) + + it('creates a key from an array', () => { + expect(formatCacheKey(['foo'])).toEqual('foo') + expect(formatCacheKey(['foo', 'bar'])).toEqual('foo-bar') + }) + + it('appends a prefix', () => { + expect(formatCacheKey('bar', 'foo')).toEqual('foo-bar') + expect(formatCacheKey(['bar'], 'foo')).toEqual('foo-bar') + expect(formatCacheKey(['bar', 'baz'], 'foo')).toEqual('foo-bar-baz') + }) + + it('does not append the prefix more than once', () => { + expect(formatCacheKey('foo-bar', 'foo')).toEqual('foo-bar') + expect(formatCacheKey(['foo', 'bar'], 'foo')).toEqual('foo-bar') + // needs a - to match against the prefix + expect(formatCacheKey('foobar', 'foo')).toEqual('foo-foobar') + }) +}) diff --git a/packages/api/src/cache/clients/BaseClient.ts b/packages/api/src/cache/clients/BaseClient.ts new file mode 100644 index 000000000000..5246a9a74bad --- /dev/null +++ b/packages/api/src/cache/clients/BaseClient.ts @@ -0,0 +1,19 @@ +export default abstract class BaseClient { + constructor() {} + + // if your client won't automatically reconnect, implement this function + // to do it manually + disconnect?(): void | Promise<void> + + abstract connect(): void | Promise<void> + + // Gets a value from the cache + abstract get(key: string): any + + // Sets a value in the cache. The return value will not be used.
+ abstract set( + key: string, + value: unknown, + options: { expires?: number } + ): Promise<any> | any // types are tightened in the child classes +} diff --git a/packages/api/src/cache/clients/InMemoryClient.ts b/packages/api/src/cache/clients/InMemoryClient.ts new file mode 100644 index 000000000000..441e4794b6f5 --- /dev/null +++ b/packages/api/src/cache/clients/InMemoryClient.ts @@ -0,0 +1,75 @@ +// Simple in-memory cache client for testing. NOT RECOMMENDED FOR PRODUCTION + +import BaseClient from './BaseClient' + +type CacheOptions = { + expires?: number +} + +export default class InMemoryClient extends BaseClient { + storage: Record<string, { expires: number; value: string }> + + // initialize with pre-cached data if needed + constructor(data = {}) { + super() + this.storage = data + } + + /** + * Special function for testing, only available in InMemoryClient + * + * Returns deserialized content of cache as an array of values (without cache keys) + * + */ + get contents() { + return Object.values(this.storage).map((cacheObj) => + JSON.parse(cacheObj.value) + ) + } + + // Not needed for InMemoryClient + async disconnect() {} + async connect() {} + + async get(key: string) { + const now = new Date() + if (this.storage[key] && this.storage[key].expires > now.getTime()) { + return JSON.parse(this.storage[key].value) + } else { + delete this.storage[key] + return null + } + } + + // stores expiration dates as epoch + async set(key: string, value: unknown, options: CacheOptions = {}) { + const now = new Date() + now.setSeconds(now.getSeconds() + (options?.expires || 315360000)) + const data = { expires: now.getTime(), value: JSON.stringify(value) } + + this.storage[key] = data + + return true + } + + /** + * Special functions for testing, only available in InMemoryClient + */ + async clear() { + this.storage = {} + } + + cacheKeyForValue(value: any) { + for (const [cacheKey, cacheObj] of Object.entries(this.storage)) { + if (cacheObj.value === JSON.stringify(value)) { + return cacheKey + } + } + + return null
} + + isCached(value: any) { + return !!this.cacheKeyForValue(value) + } +} diff --git a/packages/api/src/cache/clients/MemcachedClient.ts b/packages/api/src/cache/clients/MemcachedClient.ts new file mode 100644 index 000000000000..2fb5cc1641bd --- /dev/null +++ b/packages/api/src/cache/clients/MemcachedClient.ts @@ -0,0 +1,47 @@ +import type { Client as ClientType, ClientOptions, ServerOptions } from 'memjs' + +import BaseClient from './BaseClient' + +export default class MemcachedClient extends BaseClient { + client?: ClientType | null + servers + options + + constructor(servers: string, options?: ClientOptions & ServerOptions) { + super() + this.servers = servers + this.options = options + } + + async connect() { + const { Client: MemCachedClient } = await import('memjs') + this.client = MemCachedClient.create(this.servers, this.options) + } + + async disconnect() { + this.client?.close() + this.client = null + } + + async get(key: string) { + if (!this.client) { + await this.connect() + } + + const result = await this.client?.get(key) + + if (result?.value) { + return JSON.parse(result.value.toString()) + } else { + return result?.value + } + } + + async set(key: string, value: unknown, options: { expires?: number }) { + if (!this.client) { + await this.connect() + } + + return this.client?.set(key, JSON.stringify(value), options) + } +} diff --git a/packages/api/src/cache/clients/RedisClient.ts b/packages/api/src/cache/clients/RedisClient.ts new file mode 100644 index 000000000000..473099c71dd5 --- /dev/null +++ b/packages/api/src/cache/clients/RedisClient.ts @@ -0,0 +1,69 @@ +import type { RedisClientType } from '@redis/client' +import type { RedisClientOptions } from 'redis' + +import type { Logger } from '../../logger' + +import BaseClient from './BaseClient' + +interface SetOptions { + EX?: number +} + +type LoggerOptions = { + logger?: Logger +} + +export default class RedisClient extends BaseClient { + client?: RedisClientType | null + logger?: Logger + 
redisOptions?: RedisClientOptions + + constructor(options: RedisClientOptions & LoggerOptions) { + const { logger, ...redisOptions } = options + super() + + this.logger = logger + this.redisOptions = redisOptions + } + + async connect() { + // async import to make sure Redis isn't imported for MemCache + const { createClient } = await import('redis') + + // NOTE: type in redis client does not match the return type of createClient + this.client = createClient(this.redisOptions) as RedisClientType + this.client.on( + 'error', + (err: Error) => this.logger?.error(err) || console.error(err) + ) + + return this.client.connect() + } + + // @NOTE: disconnect intentionally not implemented for Redis + // Because node-redis recovers gracefully from connection loss + + async get(key: string) { + if (!this.client) { + await this.connect() + } + + const result = await this.client?.get(key) + + return result ? JSON.parse(result) : null + } + + async set(key: string, value: unknown, options: { expires?: number }) { + const setOptions: SetOptions = {} + + if (!this.client) { + await this.connect() + } + + if (options.expires) { + setOptions.EX = options.expires + } + + return this.client?.set(key, JSON.stringify(value), setOptions) + } +} diff --git a/packages/api/src/cache/errors.ts b/packages/api/src/cache/errors.ts new file mode 100644 index 000000000000..babbf570dd71 --- /dev/null +++ b/packages/api/src/cache/errors.ts @@ -0,0 +1,6 @@ +export class CacheTimeoutError extends Error { + constructor() { + super('Timed out waiting for response from the cache server') + this.name = 'CacheTimeoutError' + } +} diff --git a/packages/api/src/cache/index.ts b/packages/api/src/cache/index.ts new file mode 100644 index 000000000000..0e9d6ce96ab0 --- /dev/null +++ b/packages/api/src/cache/index.ts @@ -0,0 +1,192 @@ +import type { Logger } from '../logger' + +import BaseClient from './clients/BaseClient' +import { CacheTimeoutError } from './errors' + +export { default as MemcachedClient } 
from './clients/MemcachedClient' +export { default as RedisClient } from './clients/RedisClient' +export { default as InMemoryClient } from './clients/InMemoryClient' + +export interface CreateCacheOptions { + logger?: Logger + timeout?: number + prefix?: string + fields?: { + id: string + updatedAt: string + } +} + +export interface CacheOptions { + expires?: number +} + +export interface CacheFindManyOptions< + TFindManyArgs extends Record<string, unknown> +> extends CacheOptions { + conditions?: TFindManyArgs +} + +export type CacheKey = string | Array<string | number> +export type LatestQuery = Record<string, unknown> + +type GenericDelegate = { + findMany: (...args: any) => any + findFirst: (...args: any) => any +} + +const DEFAULT_LATEST_FIELDS = { id: 'id', updatedAt: 'updatedAt' } + +const wait = (ms: number) => { + return new Promise((resolve) => setTimeout(resolve, ms)) +} + +export const cacheKeySeparator = '-' + +export const formatCacheKey = (key: CacheKey, prefix?: string) => { + let output + + if (Array.isArray(key)) { + output = key.join(cacheKeySeparator) + } else { + output = key + } + + // don't prefix if already prefixed + if ( + prefix && + !output.toString().match(new RegExp('^' + prefix + cacheKeySeparator)) + ) { + output = `${prefix}${cacheKeySeparator}${output}` + } + + return output +} + +const serialize = (input: any) => { + return JSON.parse(JSON.stringify(input)) +} + +export const createCache = ( + cacheClient: BaseClient, + options?: CreateCacheOptions +) => { + const client = cacheClient + const logger = options?.logger + const timeout = options?.timeout || 1000 + const prefix = options?.prefix + const fields = options?.fields || DEFAULT_LATEST_FIELDS + + const cache = async <TResult>( + key: CacheKey, + input: () => TResult | Promise<TResult>, + options?: CacheOptions + ): Promise<TResult> => { + const cacheKey = formatCacheKey(key, prefix) + + try { + // some client lib timeouts are flaky if the server actually goes away + // (MemJS) so we'll implement our own here just in case + const result = await
Promise.race([ + client.get(cacheKey), + wait(timeout).then(() => { + throw new CacheTimeoutError() + }), + ]) + + if (result) { + logger?.debug(`[Cache] HIT key '${cacheKey}'`) + return result + } + } catch (e: any) { + logger?.error(`[Cache] Error GET '${cacheKey}': ${e.message}`) + + // If client implements a reconnect() function, try it now + if (e instanceof CacheTimeoutError && client.disconnect) { + logger?.error(`[Cache] Disconnecting current instance...`) + + await client.disconnect() + } + // stringify and parse to match what happens inside cache clients + return serialize(await input()) + } + + // data wasn't found, SET it instead + let data + + try { + data = await input() + + await Promise.race([ + client.set(cacheKey, data, options || {}), + wait(timeout).then(() => { + throw new CacheTimeoutError() + }), + ]) + + logger?.debug( + `[Cache] MISS '${cacheKey}', SET ${JSON.stringify(data).length} bytes` + ) + return serialize(data) + } catch (e: any) { + logger?.error(`[Cache] Error SET '${cacheKey}': ${e.message}`) + return serialize(data || (await input())) + } + } + + const cacheFindMany = async <TDelegate extends GenericDelegate>( + key: CacheKey, + model: TDelegate, + options: CacheFindManyOptions<Parameters<TDelegate['findMany']>[0]> = {} + ) => { + const { conditions, ...rest } = options + const cacheKey = formatCacheKey(key, prefix) + let latest, latestCacheKey + + // @ts-expect-error - Error object is not exported until `prisma generate` + const { PrismaClientValidationError } = await import('@prisma/client') + + // take the conditions from the query that's going to be cached, and only + // return the latest record (based on `updatedAt`) from that set of + // records, using its data as the cache key + try { + latest = await model.findFirst({ + ...conditions, + orderBy: { [fields.updatedAt]: 'desc' }, + select: { [fields.id]: true, [fields.updatedAt]: true }, + }) + } catch (e: any) { + if (e instanceof PrismaClientValidationError) { + logger?.error( + `[Cache] cacheFindMany error: model does not contain
\`${fields.id}\` or \`${fields.updatedAt}\` fields` + ) + } else { + logger?.error(`[Cache] cacheFindMany error: ${e.message}`) + } + + return serialize(await model.findMany(conditions)) + } + + // there may not have been any records returned, in which case we can't + // create the key so just return the query + if (latest) { + latestCacheKey = `${cacheKey}${cacheKeySeparator}${ + latest.id + }${cacheKeySeparator}${latest[fields.updatedAt].getTime()}` + } else { + logger?.debug( + `[Cache] cacheFindMany: No data to cache for key \`${key}\`, skipping` + ) + + return serialize(await model.findMany(conditions)) + } + + // everything looks good, cache() this with the computed key + return cache(latestCacheKey, () => model.findMany(conditions), rest) + } + + return { + cache, + cacheFindMany, + } +} diff --git a/packages/auth-providers-setup/src/dbAuth/setupData.ts b/packages/auth-providers-setup/src/dbAuth/setupData.ts index b2749231fa5e..284ee1548927 100644 --- a/packages/auth-providers-setup/src/dbAuth/setupData.ts +++ b/packages/auth-providers-setup/src/dbAuth/setupData.ts @@ -1,9 +1,8 @@ -import fs from 'fs' import path from 'path' import password from 'secure-random-password' -import { getPaths, colors } from '@redwoodjs/cli-helpers' +import { getPaths, colors, addEnvVarTask } from '@redwoodjs/cli-helpers' export const libPath = getPaths().api.lib.replace(getPaths().base, '') export const functionsPath = getPaths().api.functions.replace( @@ -11,28 +10,16 @@ export const functionsPath = getPaths().api.functions.replace( '' ) -export const extraTask = { - title: 'Adding SESSION_SECRET...', - task: () => { - const envPath = path.join(getPaths().base, '.env') - const secret = password.randomPassword({ - length: 64, - characters: [password.lower, password.upper, password.digits], - }) - const content = [ - '# Used to encrypt/decrypt session cookies. 
Change this value and re-deploy to log out all users of your app at once.', - `SESSION_SECRET=${secret}`, - '', - ] - let envFile = '' +const secret = password.randomPassword({ + length: 64, + characters: [password.lower, password.upper, password.digits], +}) - if (fs.existsSync(envPath)) { - envFile = fs.readFileSync(envPath).toString() + '\n' - } - - fs.writeFileSync(envPath, envFile + content.join('\n')) - }, -} +export const extraTask = addEnvVarTask( + 'SESSION_SECRET', + secret, + 'Used to encrypt/decrypt session cookies. Change this value and re-deploy to log out all users of your app at once.' +) // any notes to print out when the job is done export const notes = [ diff --git a/packages/cli-helpers/src/lib/project.ts b/packages/cli-helpers/src/lib/project.ts index 037707d507a3..bdcbb1d0e052 100644 --- a/packages/cli-helpers/src/lib/project.ts +++ b/packages/cli-helpers/src/lib/project.ts @@ -30,3 +30,20 @@ export const getInstalledRedwoodVersion = () => { process.exit(1) } } + +export const addEnvVarTask = (name: string, value: string, comment: string) => { + return { + title: `Adding ${name} var to .env...`, + task: () => { + const envPath = path.join(getPaths().base, '.env') + const content = [comment && `# ${comment}`, `${name}=${value}`, ''].flat() + let envFile = '' + + if (fs.existsSync(envPath)) { + envFile = fs.readFileSync(envPath).toString() + '\n' + } + + fs.writeFileSync(envPath, envFile + content.join('\n')) + }, + } +} diff --git a/packages/cli/src/commands/setup/cache/cache.js b/packages/cli/src/commands/setup/cache/cache.js new file mode 100644 index 000000000000..866ab6b77dfb --- /dev/null +++ b/packages/cli/src/commands/setup/cache/cache.js @@ -0,0 +1,100 @@ +import fs from 'fs' +import path from 'path' + +import chalk from 'chalk' +import { Listr } from 'listr2' +import terminalLink from 'terminal-link' + +import { addEnvVarTask } from '@redwoodjs/cli-helpers' +import { errorTelemetry } from '@redwoodjs/telemetry' + +import { 
addPackagesTask, getPaths, writeFile } from '../../../lib' +import c from '../../../lib/colors' +import { isTypeScriptProject } from '../../../lib/project' + +const CLIENT_PACKAGE_MAP = { + memcached: 'memjs', + redis: 'redis', +} + +const CLIENT_HOST_MAP = { + memcached: 'localhost:11211', + redis: 'redis://localhost:6379', +} + +export const command = 'cache <client>' + +export const description = 'Sets up an init file for service caching' + +export const builder = (yargs) => { + yargs + .positional('client', { + choices: ['memcached', 'redis'], + description: 'Cache client', + type: 'string', + required: true, + }) + .option('force', { + alias: 'f', + default: false, + description: 'Overwrite existing cache.js file', + type: 'boolean', + }) + .epilogue( + `Also see the ${terminalLink( + 'Redwood CLI Reference', + 'https://redwoodjs.com/docs/cli-commands#setup-cache' + )}` + ) +} + +export const handler = async ({ client, force }) => { + const extension = isTypeScriptProject() ? 'ts' : 'js' + + const tasks = new Listr([ + addPackagesTask({ + packages: [CLIENT_PACKAGE_MAP[client]], + side: 'api', + }), + { + title: `Writing api/src/lib/cache.${extension}`, + task: () => { + const template = fs + .readFileSync( + path.join(__dirname, 'templates', `${client}.ts.template`) + ) + .toString() + + return writeFile( + path.join(getPaths().api.lib, `cache.${extension}`), + template, + { + overwriteExisting: force, + } + ) + }, + }, + addEnvVarTask( + 'CACHE_HOST', + CLIENT_HOST_MAP[client], + `Where your ${client} server lives for service caching` + ), + { + title: 'One more thing...', + task: (_ctx, task) => { + task.title = `One more thing...\n + ${c.green('Check out the Service Cache docs for config and usage:')} + ${chalk.hex('#e8e8e8')('https://redwoodjs.com/docs/services#caching')} + ` + }, + }, + ]) + + try { + await tasks.run() + } catch (e) { + errorTelemetry(process.argv, e.message) + console.error(c.error(e.message)) + process.exit(e?.exitCode || 1) + } +} diff --git
a/packages/cli/src/commands/setup/cache/templates/memcached.ts.template b/packages/cli/src/commands/setup/cache/templates/memcached.ts.template new file mode 100644 index 000000000000..adbf786ea422 --- /dev/null +++ b/packages/cli/src/commands/setup/cache/templates/memcached.ts.template @@ -0,0 +1,30 @@ +import { + createCache, + InMemoryClient, + MemcachedClient, +} from '@redwoodjs/api/cache' + +import { logger } from './logger' + +const memJsFormattedLogger = { + log: (msg: string) => logger.error(msg), +} + +export let client: InMemoryClient | MemcachedClient + +if (process.env.NODE_ENV === 'test') { + client = new InMemoryClient() +} else { + try { + client = new MemcachedClient(process.env.CACHE_HOST, { + logger: memJsFormattedLogger, + }) + } catch (e) { + logger.error(`Could not connect to cache: ${e.message}`) + } +} + +export const { cache, cacheFindMany } = createCache(client, { + logger, + timeout: 500, +}) diff --git a/packages/cli/src/commands/setup/cache/templates/redis.ts.template b/packages/cli/src/commands/setup/cache/templates/redis.ts.template new file mode 100644 index 000000000000..a69187279325 --- /dev/null +++ b/packages/cli/src/commands/setup/cache/templates/redis.ts.template @@ -0,0 +1,24 @@ +import { + createCache, + InMemoryClient, + RedisClient, +} from '@redwoodjs/api/cache' + +import { logger } from './logger' + +export let client: InMemoryClient | RedisClient + +if (process.env.NODE_ENV === 'test') { + client = new InMemoryClient() +} else { + try { + client = new RedisClient({ url: process.env.CACHE_HOST, logger }) + } catch (e) { + logger.error(`Could not connect to cache: ${e.message}`) + } +} + +export const { cache, cacheFindMany } = createCache(client, { + logger, + timeout: 500, +}) diff --git a/packages/cli/src/commands/setup/deploy/helpers/index.js b/packages/cli/src/commands/setup/deploy/helpers/index.js index 5ef215884b27..4d7b9427d155 100644 --- a/packages/cli/src/commands/setup/deploy/helpers/index.js +++ 
b/packages/cli/src/commands/setup/deploy/helpers/index.js @@ -1,4 +1,3 @@ -import { execSync } from 'child_process' import fs from 'fs' import path from 'path' @@ -6,11 +5,7 @@ import boxen from 'boxen' import execa from 'execa' import { Listr } from 'listr2' -import { - getInstalledRedwoodVersion, - getPaths, - writeFilesTask, -} from '../../../../lib' +import { getPaths, writeFilesTask } from '../../../../lib' const REDWOOD_TOML_PATH = path.join(getPaths().base, 'redwood.toml') @@ -77,66 +72,6 @@ export const preRequisiteCheckTask = (preRequisites) => { } } -/** - * - * Use this util to install dependencies on a user's Redwood app - * - * @example addPackagesTask({ - * packages: ['fs-extra', 'somePackage@2.1.0'], - * side: 'api', // <-- leave empty for project root - * devDependency: true - * }) - */ -export const addPackagesTask = ({ - packages, - side = 'project', - devDependency = false, -}) => { - const packagesWithSameRWVersion = packages.map((pkg) => { - if (pkg.includes('@redwoodjs')) { - return `${pkg}@${getInstalledRedwoodVersion()}` - } else { - return pkg - } - }) - - let installCommand - // if web,api - if (side !== 'project') { - installCommand = [ - 'yarn', - [ - 'workspace', - side, - 'add', - devDependency && '--dev', - ...packagesWithSameRWVersion, - ].filter(Boolean), - ] - } else { - const stdout = execSync('yarn --version') - - const yarnVersion = stdout.toString().trim() - - installCommand = [ - 'yarn', - [ - yarnVersion.startsWith('1') && '-W', - 'add', - devDependency && '--dev', - ...packagesWithSameRWVersion, - ].filter(Boolean), - ] - } - - return { - title: `Adding dependencies to ${side}`, - task: async () => { - await execa(...installCommand) - }, - } -} - /** * * Use this to add files to a users project diff --git a/packages/cli/src/commands/setup/deploy/providers/baremetal.js b/packages/cli/src/commands/setup/deploy/providers/baremetal.js index de8c639e1dcd..d6e64ca53a0c 100644 --- 
a/packages/cli/src/commands/setup/deploy/providers/baremetal.js +++ b/packages/cli/src/commands/setup/deploy/providers/baremetal.js @@ -5,9 +5,9 @@ import { Listr } from 'listr2' import { errorTelemetry } from '@redwoodjs/telemetry' -import { getPaths } from '../../../../lib' +import { addPackagesTask, getPaths } from '../../../../lib' import c from '../../../../lib/colors' -import { addFilesTask, addPackagesTask, printSetupNotes } from '../helpers' +import { addFilesTask, printSetupNotes } from '../helpers' import { DEPLOY, ECOSYSTEM, MAINTENANCE } from '../templates/baremetal' export const command = 'baremetal' diff --git a/packages/cli/src/commands/setup/deploy/providers/layer0.js b/packages/cli/src/commands/setup/deploy/providers/layer0.js index 9fd69d273c46..9310bacf0c9c 100644 --- a/packages/cli/src/commands/setup/deploy/providers/layer0.js +++ b/packages/cli/src/commands/setup/deploy/providers/layer0.js @@ -4,17 +4,13 @@ import { Listr } from 'listr2' import { errorTelemetry } from '@redwoodjs/telemetry' -import { getPaths } from '../../../../lib' +import { addPackagesTask, getPaths } from '../../../../lib' import c from '../../../../lib/colors' import { ERR_MESSAGE_MISSING_CLI, ERR_MESSAGE_NOT_INITIALIZED, } from '../../../deploy/layer0' -import { - preRequisiteCheckTask, - printSetupNotes, - addPackagesTask, -} from '../helpers' +import { preRequisiteCheckTask, printSetupNotes } from '../helpers' export const command = 'layer0' export const description = 'Setup Layer0 deploy' diff --git a/packages/cli/src/commands/setup/deploy/providers/serverless.js b/packages/cli/src/commands/setup/deploy/providers/serverless.js index 7a6fdedfa45e..cfce08750be9 100644 --- a/packages/cli/src/commands/setup/deploy/providers/serverless.js +++ b/packages/cli/src/commands/setup/deploy/providers/serverless.js @@ -6,13 +6,12 @@ import { Listr } from 'listr2' import { errorTelemetry } from '@redwoodjs/telemetry' -import { getPaths } from '../../../../lib' +import { 
addPackagesTask, getPaths } from '../../../../lib' import c from '../../../../lib/colors' import { addToGitIgnoreTask, addToDotEnvTask, addFilesTask, - addPackagesTask, printSetupNotes, } from '../helpers' import { SERVERLESS_API_YML } from '../templates/serverless/api' diff --git a/packages/cli/src/lib/index.js b/packages/cli/src/lib/index.js index 7522ab99fd7f..8005e92a4a43 100644 --- a/packages/cli/src/lib/index.js +++ b/packages/cli/src/lib/index.js @@ -1,3 +1,4 @@ +import { execSync } from 'child_process' import fs from 'fs' import https from 'https' import path from 'path' @@ -169,6 +170,7 @@ export const saveRemoteFileToDisk = ( export const getInstalledRedwoodVersion = () => { try { + // @ts-ignore TS Config issue, due to src being the rootDir const packageJson = require('../../package.json') return packageJson.version } catch (e) { @@ -453,6 +455,66 @@ export const removeRoutesFromRouterTask = (routes, layout) => { }) } +/** + * + * Use this util to install dependencies on a user's Redwood app + * + * @example addPackagesTask({ + * packages: ['fs-extra', 'somePackage@2.1.0'], + * side: 'api', // <-- leave empty for project root + * devDependency: true + * }) + */ +export const addPackagesTask = ({ + packages, + side = 'project', + devDependency = false, +}) => { + const packagesWithSameRWVersion = packages.map((pkg) => { + if (pkg.includes('@redwoodjs')) { + return `${pkg}@${getInstalledRedwoodVersion()}` + } else { + return pkg + } + }) + + let installCommand + // if web,api + if (side !== 'project') { + installCommand = [ + 'yarn', + [ + 'workspace', + side, + 'add', + devDependency && '--dev', + ...packagesWithSameRWVersion, + ].filter(Boolean), + ] + } else { + const stdout = execSync('yarn --version') + + const yarnVersion = stdout.toString().trim() + + installCommand = [ + 'yarn', + [ + yarnVersion.startsWith('1') && '-W', + 'add', + devDependency && '--dev', + ...packagesWithSameRWVersion, + ].filter(Boolean), + ] + } + + return { + title: `Adding 
dependencies to ${side}`, + task: async () => { + await execa(...installCommand) + }, + } +} + export const runCommandTask = async (commands, { verbose }) => { + const tasks = new Listr( commands.map(({ title, cmd, args, opts = {}, cwd = getPaths().base }) => ({ diff --git a/packages/testing/cache/index.js b/packages/testing/cache/index.js new file mode 100644 index 000000000000..215cb6cd0dc8 --- /dev/null +++ b/packages/testing/cache/index.js @@ -0,0 +1,2 @@ +/* eslint-env es6, commonjs */ +module.exports = require('../dist/cache') diff --git a/packages/testing/cache/package.json b/packages/testing/cache/package.json new file mode 100644 index 000000000000..5e13515fce56 --- /dev/null +++ b/packages/testing/cache/package.json @@ -0,0 +1,4 @@ +{ + "main": "./index.js", + "types": "../dist/cache/index.d.ts" +} diff --git a/packages/testing/package.json b/packages/testing/package.json index 97931e723847..f09db75c8529 100644 --- a/packages/testing/package.json +++ b/packages/testing/package.json @@ -14,6 +14,7 @@ "config", "web", "api", + "cache", "dist" ], "scripts": { diff --git a/packages/testing/src/cache/index.ts b/packages/testing/src/cache/index.ts new file mode 100644 index 000000000000..c0f373e3fca6 --- /dev/null +++ b/packages/testing/src/cache/index.ts @@ -0,0 +1,179 @@ +import type { InMemoryClient } from '@redwoodjs/api/cache' + +type AsymmetricMatcher = { + $$typeof: symbol +} + +type ExpectedValue = Array<unknown> | any | AsymmetricMatcher +type ExpectedKey = string | RegExp +// Custom Jest matchers to be used with Redwood's server caching +// Just needs a global import like import '@redwoodjs/testing/cache' + +expect.extend({ + toHaveCached( + cacheClient: InMemoryClient, + keyOrExpectedValue: ExpectedKey | ExpectedValue, + expectedValue?: ExpectedValue + ) { + let value: ExpectedValue + let regexKey: RegExp | undefined + let stringKey: string | undefined + + // Figures out which form of this function we're calling: + // with one or two arguments + + if
(_isKVPair(keyOrExpectedValue, expectedValue)) { + // Two argument form, the key that is caching it and the value that is cached: + // toHaveCached('cache-key', { foo: 'bar' }) + if (keyOrExpectedValue instanceof RegExp) { + regexKey = keyOrExpectedValue + } else { + stringKey = keyOrExpectedValue + } + value = expectedValue + } else { + // One argument form, only check the value that's cached: + // toHaveCached({ foo: 'bar' }) + value = keyOrExpectedValue + } + + let foundKVPair: { key: string; value: any } | undefined + let found = false + + // If it's a stringKey we can do direct lookup + if (stringKey) { + return _checkValueForKey(cacheClient, stringKey, value) + } else { + // For RegEx expectedKey or just a value check, we need to iterate + for (const [cachedKey, cachedValue] of Object.entries( + cacheClient.storage + )) { + if (found) { + break + } + + if (regexKey?.test(cachedKey)) { + found = true + foundKVPair = { key: cachedKey, value: cachedValue.value } + } else { + // no key was passed, just match on value + found = cachedValue.value === JSON.stringify(value) + } + } + } + + // Key was supplied as a regex + // So we check if the value is cached, and return early + if (foundKVPair) { + return _checkValueForKey(cacheClient, foundKVPair.key, value) + } + + if (found) { + return { + pass: true, + message: () => 'Found cached value', + } + } else { + return { + pass: false, + message: () => + `Expected Cached Value: ${this.utils.printExpected( + JSON.stringify(value) + )}\n` + + `Cache Contents: ${this.utils.printReceived(cacheClient.storage)}`, + } + } + }, +}) + +const _isKVPair = ( + keyOrCachedValue: ExpectedKey | ExpectedValue, + cachedValue?: ExpectedValue +): keyOrCachedValue is ExpectedKey => { + return !!cachedValue && !!keyOrCachedValue +} + +const _checkValueForKey = ( + cacheClient: InMemoryClient, + cacheKey: string, + expectedValue: ExpectedValue +) => { + try { + const cachedStringValue = cacheClient.storage[cacheKey]?.value + + // Check if it's a jest asymmetric matcher i.e. objectContaining, arrayContaining + const expectedValueOrMatcher = + expectedValue?.$$typeof === Symbol.for('jest.asymmetricMatcher') + ? expectedValue + : JSON.parse(JSON.stringify(expectedValue)) // Because e.g. dates get converted to string, when cached + + expect( + cachedStringValue ? JSON.parse(cachedStringValue) : undefined + ).toEqual(expectedValueOrMatcher) + + return { + pass: true, + message: () => `Found cached value with ${cacheKey}`, + } + } catch (e: any) { + // Return the message from jest's helpers so they get a nice diff + // and exit early! + return { + pass: false, + message: () => e.message, + } + } +} + +declare global { + // eslint-disable-next-line @typescript-eslint/no-namespace + namespace jest { + interface Matchers<R> { + /** + * + * Use this helper to simplify testing your InMemoryCache client. + * + * The expected value you provide will be serialized and deserialized for you. + * + * NOTE: Does not support partialMatch - use cacheClient.contents or test with a key! + * @param expectedValue The value that is cached, must be serializable + */ + toHaveCached(expectedValue: unknown): R + + /** + * + * Use this helper to simplify testing your InMemoryCache client. + * + * + * @param cacheKey The key that your value is cached under + * @param expectedValue The expected value. Can be a jest asymmetric matcher (using `partialMatch`) + */ + toHaveCached(cacheKey: ExpectedKey, expectedValue: ExpectedValue): R + } + } +} + +/** + * This is just syntactic sugar, to help with testing cache contents. + * + * If you pass an array, it will check arrays for a partial match of the object. + * + * If you pass an object, it will check for a partial match of the object. + * + * Useful when you don't want to compare dates/auto-generated ids etc.
+ *
+ * @example
+ * expect(testCacheClient.contents).toContainEqual(partialMatch({ title: 'Only look for this title' }))
+ *
+ * @example
+ * expect(testCacheClient.contents).toContainEqual(partialMatch([{ id: 1 }, { id: 2 }]))
+ *
+ * @param value Object or Array of objects to match
+ */
+export const partialMatch = (
+  value: Record<string, unknown> | Array<Record<string, unknown>>
+) => {
+  return Array.isArray(value)
+    ? expect.arrayContaining(value.map((v) => expect.objectContaining(v)))
+    : expect.objectContaining(value)
+}
diff --git a/yarn.lock b/yarn.lock
index 028884b6b0fe..a7f371eb832f 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -6289,6 +6289,62 @@ __metadata:
   languageName: node
   linkType: hard
 
+"@redis/bloom@npm:1.0.2":
+  version: 1.0.2
+  resolution: "@redis/bloom@npm:1.0.2"
+  peerDependencies:
+    "@redis/client": ^1.0.0
+  checksum: 1ec820145f58b5b86628b90e284a736c94ebe7747bbcf7fae968c268d1aab4f13c488808c8c1c75ad8b5fcf0a4e9b3ef5ba4d7d68585abe324461b119e72c5bb
+  languageName: node
+  linkType: hard
+
+"@redis/client@npm:1.2.0":
+  version: 1.2.0
+  resolution: "@redis/client@npm:1.2.0"
+  dependencies:
+    cluster-key-slot: 1.1.0
+    generic-pool: 3.8.2
+    yallist: 4.0.0
+  checksum: 89d084fb9fc4695857e875a3c55bef23b6f078f51df5e485004068a1790a80b71394d749ac86ea889bdf544b61d91c1601d96fa94ed7269ca9a1899edc228985
+  languageName: node
+  linkType: hard
+
+"@redis/graph@npm:1.0.1":
+  version: 1.0.1
+  resolution: "@redis/graph@npm:1.0.1"
+  peerDependencies:
+    "@redis/client": ^1.0.0
+  checksum: 32821fd98641727946011e836e32a33123083f3b00410d590681282ed162b70125e796e5ba92e9e503ce708647625a4c024779d2866cfb38475ae9b6d5dccd21
+  languageName: node
+  linkType: hard
+
+"@redis/json@npm:1.0.3":
+  version: 1.0.3
+  resolution: "@redis/json@npm:1.0.3"
+  peerDependencies:
+    "@redis/client": ^1.0.0
+  checksum: dfec3fbd1225e023effcb2e1bf14ed1756384df3df9fa038969ea117ec19ee200df0496391778729a4473dfe974e38a0673822e4b66fbe5691f4dc9e2b4d4977
+  languageName: node
+  linkType: hard
+
+"@redis/search@npm:1.0.6":
+  version: 1.0.6
+  resolution: "@redis/search@npm:1.0.6"
+  peerDependencies:
+    "@redis/client": ^1.0.0
+  checksum: fdec49cd36a3fde6add844e7f6417f1abdc04fa4de9aa58878fefca0e48b64c2d9c1d9683956aa671182788bb443c153dbc5ca7b9991e4495152764a0b5b6a8e
+  languageName: node
+  linkType: hard
+
+"@redis/time-series@npm:1.0.3":
+  version: 1.0.3
+  resolution: "@redis/time-series@npm:1.0.3"
+  peerDependencies:
+    "@redis/client": ^1.0.0
+  checksum: ce79b8cb123e4b79ef73daaf44ec69ef85fbba5f8791c51db3952db19440699929d30b7ea4d7b9483f6afb9c1fc2a7d3b793d3c5a0c981443fc129a26f43168a
+  languageName: node
+  linkType: hard
+
 "@redwoodjs/api-server@3.2.0, @redwoodjs/api-server@workspace:packages/api-server":
   version: 0.0.0-use.local
   resolution: "@redwoodjs/api-server@workspace:packages/api-server"
@@ -6343,6 +6399,7 @@ __metadata:
     "@types/crypto-js": 4.1.1
     "@types/jsonwebtoken": 8.5.9
     "@types/md5": 2.3.2
+    "@types/memjs": 1
    "@types/pascalcase": 1.0.1
     "@types/split2": 3.2.1
     "@types/uuid": 8.3.4
@@ -6356,8 +6413,10 @@ __metadata:
     jsonwebtoken: 8.5.1
     jwks-rsa: 2.0.5
     md5: 2.3.0
+    memjs: 1.3.0
     pascalcase: 1.0.0
     pino: 8.7.0
+    redis: 4.2.0
     split2: 4.1.0
     title-case: 3.0.3
     typescript: 4.7.4
@@ -9268,6 +9327,15 @@ __metadata:
   languageName: node
   linkType: hard
 
+"@types/memjs@npm:1":
+  version: 1.2.4
+  resolution: "@types/memjs@npm:1.2.4"
+  dependencies:
+    "@types/node": "*"
+  checksum: 119e252cba343f01d20a58a7065a2659de6709a2756ccdf02a841a6b0a035bdc751662a71c84ae516d6caf863834e11be10b6fbf6e8ab70e410eba6a1579a66a
+  languageName: node
+  linkType: hard
+
 "@types/micromatch@npm:*":
   version: 4.0.2
   resolution: "@types/micromatch@npm:4.0.2"
@@ -13203,6 +13271,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"cluster-key-slot@npm:1.1.0":
+  version: 1.1.0
+  resolution: "cluster-key-slot@npm:1.1.0"
+  checksum: e72a437ba57f79f8435bf8ea689d0a13aed7e9628738c545af09a77199bc50986ecff05840c34303f41cf1837e8e9aa49709455a7313b7ed200090cca4aae57a
+  languageName: node
+  linkType: hard
+
 "cmd-shim@npm:^5.0.0":
   version: 5.0.0
   resolution: "cmd-shim@npm:5.0.0"
@@ -17792,6 +17867,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"generic-pool@npm:3.8.2":
+  version: 3.8.2
+  resolution: "generic-pool@npm:3.8.2"
+  checksum: 8bff2b77da4c082015f0feeb6300557654b842dc302585b6cdb10afb0dc4db717e6ebce838b334773719289235fb4959aa4adf7527d4da79c007b0b94fb63172
+  languageName: node
+  linkType: hard
+
 "gensync@npm:^1.0.0-beta.1, gensync@npm:^1.0.0-beta.2":
   version: 1.0.0-beta.2
   resolution: "gensync@npm:1.0.0-beta.2"
@@ -22597,6 +22679,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"memjs@npm:1.3.0":
+  version: 1.3.0
+  resolution: "memjs@npm:1.3.0"
+  checksum: 0fc39e296742610664bef31303fad16ead5eb588caf53ba0bd0b8df2c31992055dece7986fd55f23236f5ff40f6f4c5bf808a93a7976469c3e6c2ce4cafecf7f
+  languageName: node
+  linkType: hard
+
 "memoize-one@npm:^5.0.0":
   version: 5.2.1
   resolution: "memoize-one@npm:5.2.1"
@@ -26884,6 +26973,20 @@ __metadata:
   languageName: node
   linkType: hard
 
+"redis@npm:4.2.0":
+  version: 4.2.0
+  resolution: "redis@npm:4.2.0"
+  dependencies:
+    "@redis/bloom": 1.0.2
+    "@redis/client": 1.2.0
+    "@redis/graph": 1.0.1
+    "@redis/json": 1.0.3
+    "@redis/search": 1.0.6
+    "@redis/time-series": 1.0.3
+  checksum: d3232638e4dada24283d2e388d36e9874af2d01e89a2f4a07dd19992065c0ba75628ac40a95419cb5b44b6dac0287a71f753b7b62086225af41cee7fca536ea4
+  languageName: node
+  linkType: hard
+
 "regenerate-unicode-properties@npm:^10.0.1":
   version: 10.0.1
   resolution: "regenerate-unicode-properties@npm:10.0.1"
@@ -32220,6 +32323,13 @@ __metadata:
   languageName: node
   linkType: hard
 
+"yallist@npm:4.0.0, yallist@npm:^4.0.0":
+  version: 4.0.0
+  resolution: "yallist@npm:4.0.0"
+  checksum: 2286b5e8dbfe22204ab66e2ef5cc9bbb1e55dfc873bbe0d568aa943eb255d131890dfd5bf243637273d31119b870f49c18fcde2c6ffbb7a7a092b870dc90625a
+  languageName: node
+  linkType: hard
+
 "yallist@npm:^2.0.0":
   version: 2.1.2
   resolution: "yallist@npm:2.1.2"
@@ -32234,13 +32344,6 @@ __metadata:
   languageName: node
   linkType: hard
 
-"yallist@npm:^4.0.0":
-  version: 4.0.0
-  resolution: "yallist@npm:4.0.0"
-  checksum: 2286b5e8dbfe22204ab66e2ef5cc9bbb1e55dfc873bbe0d568aa943eb255d131890dfd5bf243637273d31119b870f49c18fcde2c6ffbb7a7a092b870dc90625a
-  languageName: node
-  linkType: hard
-
 "yaml-ast-parser@npm:^0.0.43":
   version: 0.0.43
   resolution: "yaml-ast-parser@npm:0.0.43"