- Adding a new `materialize_network_policy` resource and data source #669. A network policy allows you to manage access to the system through IP-based rules.
  - Example `materialize_network_policy` resource:

    ```hcl
    resource "materialize_network_policy" "office_policy" {
      name = "office_access_policy"

      rule {
        name      = "new_york"
        action    = "allow"
        direction = "ingress"
        address   = "8.2.3.4/28"
      }

      rule {
        name      = "minnesota"
        action    = "allow"
        direction = "ingress"
        address   = "2.3.4.5/32"
      }

      comment = "Network policy for office locations"
    }
    ```

  - Example `materialize_network_policy` data source:

    ```hcl
    data "materialize_network_policy" "all" {}
    ```

  - Added support for the new `CREATENETWORKPOLICY` system privilege:

    ```hcl
    resource "materialize_role" "test" {
      name = "test_role"
    }

    resource "materialize_grant_system_privilege" "role_createnetworkpolicy" {
      role_name = materialize_role.test.name
      privilege = "CREATENETWORKPOLICY"
    }
    ```

  - An initial `default` network policy will be created. This policy allows open access to the environment and can be altered by a superuser. Use the `ALTER SYSTEM SET network_policy TO 'office_access_policy'` command or the `materialize_system_parameter` resource to update the default network policy.

    ```hcl
    resource "materialize_system_parameter" "system_parameter" {
      name  = "network_policy"
      value = "office_access_policy"
    }
    ```
- Updated the cluster and cluster replica query builders to skip the `DISK` property for `cc` and `C` clusters, as this is enabled by default for those sizes #671
- Upgrade from `pgx` v3 to v4 #663
- Routine dependency updates: #668, #667
- Upgraded Go version from `1.22.0` to `1.22.7` for improved performance and security fixes #669
- Added `--bootstrap-builtin-analytics-cluster-replica-size` to the Docker compose file to fix failing tests #671
- Add support for `partition_by` attribute in `materialize_sink_kafka` #659
  - The `partition_by` attribute accepts a SQL expression used to partition the data in the Kafka sink. Can only be used with `ENVELOPE UPSERT`.
  - Example usage:

    ```hcl
    resource "materialize_sink_kafka" "orders_kafka_sink" {
      name = "orders_sink"

      kafka_connection {
        name = "kafka_connection"
      }

      topic        = "orders_topic"
      partition_by = "column_name"

      # Additional configuration...
    }
    ```
- Set `transaction_isolation` as a connection option instead of executing a `SET` command #660
- Routine dependency updates: #661
- Explicitly set `TRANSACTION_ISOLATION` to `STRICT SERIALIZABLE` #657
- Fix the user-not-found state handling in the `materialize_user` resource #638
- Fix inconsistent error handling in `ReadUser` in the `materialize_user` resource #642
- Update Go version to 1.22 #650
- Switched tests to use a stable version of the Rust Frontegg mock service #653
- Improve the Cloud Mock Service #651
- Disable telemetry in CI #640
- Add `wait_until_ready` option to `cluster` resources, which allows graceful cluster reconfiguration (i.e., with no downtime) for clusters with no sources or sinks #632
  - Example usage:

    ```hcl
    resource "materialize_cluster" "cluster" {
      name = var.mz_cluster
      size = "25cc"

      wait_until_ready {
        enabled    = true
        timeout    = "10m"
        on_timeout = "COMMIT"
      }
    }
    ```
- Add support for AWS IAM authentication in `materialize_connection_kafka` #627
  - Example usage:

    ```hcl
    # Create an AWS connection for IAM authentication
    resource "materialize_connection_aws" "msk_auth" {
      name            = "aws_msk"
      assume_role_arn = "arn:aws:iam::123456789012:role/MaterializeMSK"
    }

    # Create a Kafka connection using AWS IAM authentication
    resource "materialize_connection_kafka" "kafka_msk" {
      name = "kafka_msk"

      kafka_broker {
        broker = "b-1.your-cluster-name.abcdef.c1.kafka.us-east-1.amazonaws.com:9098"
      }

      security_protocol = "SASL_SSL"

      aws_connection {
        name          = materialize_connection_aws.msk_auth.name
        database_name = materialize_connection_aws.msk_auth.database_name
        schema_name   = materialize_connection_aws.msk_auth.schema_name
      }
    }
    ```
- Fix `materialize_connection_aws` read function issues caused by empty internal table #630
- Fix duplicate application name in the provider configuration #626
- Add new `identify_by_name` option for `materialize_cluster` resource #618
  - When set to `true`, the cluster name is used as the Terraform resource ID instead of the internal cluster ID.
  - This eliminates the need to update the Terraform state file if a cluster is recreated with the same name but a different ID outside of Terraform (e.g., via the Materialize UI or dbt).
  - The resource now uses the format `region:type:value` for IDs, where type is either "name" or "id".

  Example usage:

  ```hcl
  resource "materialize_cluster" "test_name_as_id" {
    name               = "test_name_as_id"
    size               = "25cc"
    replication_factor = "1"
    identify_by_name   = true # Set to true to use the cluster name as the resource ID
  }
  ```

  Existing `materialize_cluster` resources will be automatically migrated to the new ID format. To use this new feature:
  - Update your Terraform configuration to `v0.8.6` or later of the Materialize provider.
  - Run `terraform init` to download the new provider version.
  - Run `terraform plan` and `terraform refresh` to verify the changes.

  No manual intervention is required for existing resources, but reviewing the plan is recommended to ensure the expected updates are made.
- Add `materialize_source_mysql` tests for the `ignore_columns` attribute #616
- Extend integration tests to create resources in two different regions #614
- Change cluster option from `REHYDRATION TIME ESTIMATE` to `HYDRATION TIME ESTIMATE` #603
- Add support for upsert options error decoding alias #612:

  ```hcl
  envelope {
    upsert = true
    upsert_options {
      value_decoding_errors {
        inline {
          enabled = true
          alias   = "my_error_col"
        }
      }
    }
  }
  ```
- Update Terraform docs examples #613
- Add support for adding and removing subsources for the `materialize_source_mysql` resource #604
- Allow the `roles` attribute for the `materialize_user` resource to be updated without `forceNew` #610
- Add `key_compatibility_level` and `value_compatibility_level` attributes to the `materialize_sink_kafka` resource #600 (see the sketch after this list)
- Add `progress_topic_replication_factor` attribute to the `materialize_connection_kafka` resource #598
- Add topic options to the `materialize_sink_kafka` resource #597. See the Kafka documentation for available configs. For example:

  ```hcl
  topic_replication_factor = 1
  topic_partition_count    = 6
  topic_config = {
    "cleanup.policy" = "compact"
    "retention.ms"   = "86400000"
  }
  ```
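For the compatibility level and progress topic attributes added above, a minimal, hedged sketch; the values shown (such as `"BACKWARD"` and the replication factor of 1) are illustrative assumptions, not documented defaults:

```hcl
resource "materialize_connection_kafka" "example" {
  name = "kafka_connection"

  kafka_broker {
    broker = "redpanda:9092"
  }

  # Replication factor for the connection's progress topic (assumed value).
  progress_topic_replication_factor = 1
}

resource "materialize_sink_kafka" "example" {
  name  = "example_sink"
  topic = "example_topic"

  kafka_connection {
    name = materialize_connection_kafka.example.name
  }

  # Schema registry compatibility levels; "BACKWARD" is an assumed example value.
  key_compatibility_level   = "BACKWARD"
  value_compatibility_level = "BACKWARD"

  # Additional configuration (from, format, etc.)...
}
```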
- Fix a bug in the `materialize_source_kafka` resource where the value format JSON was not processed correctly #607
- Fix CI intermittent failing tests #609
- Fix connections data source tests #594
- Routine dependency updates: #608
- New data source: `materialize_user` #592. This data source allows retrieving information about existing users in an organization by email. It can be used together with the `materialize_user` resource for importing existing users:

  ```hcl
  # Retrieve an existing user by email
  data "materialize_user" "example" {
    email = "[email protected]"
  }

  output "user_id" {
    value = data.materialize_user.example.id
  }
  ```

  Define the `materialize_user` resource and import the existing user:

  ```hcl
  resource "materialize_user" "example" {
    email = "[email protected]"
  }
  ```

  Import command:

  ```sh
  # terraform import materialize_user.example ${data.materialize_user.example.id}
  ```
- Refactor to only fetch the list of Frontegg roles once per Terraform provider invocation #595.
- Improve the Frontegg HTTP mock server for better maintainability #593.
- Resource Kafka Sink: Add `headers` attribute #569 (see the sketch after this list)
- Resource Kafka Source: Add upsert options `value_decoding_errors` attribute #586
- Rename `BOOTSTRAP_BUILTIN_INTROSPECTION_CLUSTER_REPLICA_SIZE` #584
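A minimal, hedged sketch of the `headers` attribute on a Kafka sink; the assumption here is that it names a column in the sink's input that supplies the Kafka message headers:

```hcl
resource "materialize_sink_kafka" "example" {
  name  = "example_sink"
  topic = "example_topic"

  kafka_connection {
    name = "kafka_connection"
  }

  # Column in the upstream relation used for Kafka message headers (assumed usage).
  headers = "headers_column"

  # Additional configuration (from, format, etc.)...
}
```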
- Allow creating service app passwords, which are app passwords that are associated with a user without an email address. For example, here's how you might provision an app password for a production dashboard application that should have the `USAGE` privilege on the `production_analytics` database:

  ```hcl
  resource "materialize_role" "production_dashboard" {
    name = "svc_production_dashboard"
  }

  resource "materialize_app_password" "production_dashboard_app_password" {
    name  = "production_dashboard_app_password"
    type  = "service"
    user  = materialize_role.production_dashboard.name
    roles = ["Member"]
  }

  resource "materialize_database_grant" "database_grant_usage" {
    role_name     = materialize_role.production_dashboard.name
    privilege     = "USAGE"
    database_name = "production_analytics"
  }
  ```
- Allow skipping activation emails when creating users #573
- Allow `resource_sink_kafka` `FROM` attribute updates #578
- This release introduces a breaking change to the `materialize_source_postgres` resource configuration: #487
  - The `schema` property is removed from the `materialize_source_postgres` resource configuration. Users must now explicitly define the `table` block to specify the tables to include in the source.
  - The `table` block is now required: Previously, the `table` block was optional, allowing users to specify specific tables to include in the source. Starting with version `v0.8.0`, users must explicitly define the tables to be included in the source. This change is designed to ensure consistency and predictability in the Terraform provider's behavior.
  - Changes to the `table` block: The `table` block schema has been updated as follows:

    ```hcl
    table {
      upstream_name        = string # Required: The name of the table in the upstream database (previously `name`)
      upstream_schema_name = string # The schema of the table in the upstream database
      name                 = string # The name of the table in Materialize (previously `alias`)
      schema_name          = string # The schema of the table in Materialize
      database_name        = string # The name of the database where the table will be created in Materialize
    }
    ```
  - Migration Guide: For a detailed guide on adapting to these changes, refer to the migration guide here.
- The `subsource` read-only attribute is removed from all source resources as part of a change to align with Materialize's internal behavior.
- Routine dependency updates: #564
- Update `region` attribute for all resources to be `computed` #559
- Add `connection_id` attribute for `materialize_connection` data source #553
- Check for `nil` values in `GetSliceValueString` #552
- Fix `materialize_connection_kafka` rename race condition #561
- `public` schemas are no longer created by default: In previous versions, the `materialize_database` resource automatically created a `public` schema in each new database, mimicking traditional SQL database behavior. Starting with version `v0.7.0`, this default behavior has been removed. Users must now explicitly define and manage `public` schemas within their Terraform configurations. This change is designed to align the Terraform provider's behavior more closely with its design principles, ensuring consistency and predictability.
  - Action Required: Explicitly define `public` schemas in your Terraform configurations if needed, along with the required `USAGE` grant to the `PUBLIC` pseudo-role for the public schema. A sketch follows this list.
  - Migration Guide: This only affects newly created databases. Details on adapting to this change are available here.
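A minimal sketch of explicitly managing a `public` schema and its grant; the grant resource's attribute names are assumptions based on the provider's other grant resources:

```hcl
resource "materialize_database" "example" {
  name = "example_database"
}

resource "materialize_schema" "public" {
  name          = "public"
  database_name = materialize_database.example.name
}

# Grant USAGE on the schema to the PUBLIC pseudo-role (attribute names assumed).
resource "materialize_schema_grant" "public_usage" {
  role_name     = "PUBLIC"
  privilege     = "USAGE"
  schema_name   = materialize_schema.public.name
  database_name = materialize_database.example.name
}
```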
- Add scheduling attribute to the `materialize_cluster` resource #545
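A hedged sketch of the scheduling attribute; the nested block layout and attribute names (`on_refresh`, `enabled`, and the hydration time estimate, which a later release renames from "rehydration" to "hydration") are assumptions rather than confirmed syntax:

```hcl
resource "materialize_cluster" "scheduled" {
  name = "scheduled_cluster"
  size = "25cc"

  # Assumed block structure for refresh-based scheduling.
  scheduling {
    on_refresh {
      enabled                 = true
      hydration_time_estimate = "1 hour"
    }
  }
}
```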
- Fix an issue where resource imports were failing when using a non-default region #550
- Routine dependency updates #549
- Allow `ALTER CONNECTION` updates for the following connection resources:
  - `materialize_connection_mysql` resource (#541)
  - `materialize_connection_confluent_schema_registry` resource (#540)
  - `materialize_connection_kafka` resource (#538)
  - `materialize_connection_aws_privatelink` resource (#533)
  - `materialize_connection_aws` resource (#529)
- Add support for Frontegg SCIM groups, which includes the following new resources (#525):
  - `materialize_scim_group`
  - `materialize_scim_group_roles`
  - `materialize_scim_group_users`
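A hedged sketch of how these SCIM group resources might be wired together; the attribute names (`description`, `group_id`, `roles`, `users`) are assumptions, not confirmed schema:

```hcl
resource "materialize_scim_group" "engineering" {
  name        = "engineering"
  description = "Engineering team" # assumed attribute
}

resource "materialize_scim_group_roles" "engineering_roles" {
  group_id = materialize_scim_group.engineering.id # assumed attribute
  roles    = ["Member"]                            # assumed attribute
}

resource "materialize_scim_group_users" "engineering_users" {
  group_id = materialize_scim_group.engineering.id # assumed attribute
  users    = ["user-id-1"]                         # assumed attribute
}
```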
- Add support for the key value load generator source (#537)
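A hedged sketch of a key value load generator source; the `key_value_options` block and its attribute names are assumptions about how the provider exposes the SQL-level `KEY VALUE` generator options:

```hcl
resource "materialize_source_load_generator" "key_value_example" {
  name                = "key_value_source"
  cluster_name        = "quickstart"
  load_generator_type = "KEY VALUE"

  # Assumed block and attribute names mirroring the KEY VALUE generator options.
  key_value_options {
    keys            = 100
    value_size      = 128
    snapshot_rounds = 1
  }
}
```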
- New `materialize_region` resource (#535)
- Add `validate` parameter to `materialize_aws_privatelink` connection (#539); see the sketch after this list
- Remove `idle_arrangement_merge_effort` option from `materialize_cluster` (#532)
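A minimal sketch of the `validate` parameter on an AWS PrivateLink connection; aside from `validate`, the resource and attribute names follow the provider's usual connection resources and should be read as assumptions:

```hcl
resource "materialize_connection_aws_privatelink" "example" {
  name               = "privatelink_connection"
  service_name       = "com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0"
  availability_zones = ["use1-az1", "use1-az2"]

  # Skip connection validation at creation time (the parameter added in #539).
  validate = false
}
```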
- Add additional `materialize_connection_postgres` unit tests (#542)
- Define builtin probe cluster size (#536)
- Remove unnecessary SSH connections in Postgres tests (#531)
- Refactor the Frontegg package (#530)
- Use unique SSH conn name to fix CI flakes (#527)
- Add `PUBLIC` pseudo-role to resource grants #524
- Add `materialize_role_parameter` resource #522; see the sketch after this list
- Allow `ALTER CONNECTION` updates for `resource_connection_postgres` #511
- Allow `ALTER CONNECTION` updates for SSH tunnels #523
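A hedged sketch of the `materialize_role_parameter` resource; the attribute names (`role_name`, `variable_name`, `variable_value`) are assumptions about the resource's schema:

```hcl
resource "materialize_role_parameter" "cluster_default" {
  role_name      = "dev_role"     # assumed existing role
  variable_name  = "cluster"      # assumed attribute name
  variable_value = "quickstart"   # assumed attribute name
}
```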
- Fix failing environmentd bootstrap #512
- Allow `region` option for data sources #506
- Remove scale factor for auction/counter load generator sources #502
- Add cluster availability zone attribute #498
- Added acceptance tests for SSO group mapping resource #505
- Added acceptance tests for SSO default roles resource #503
- Added acceptance tests for SSO domain resource #497
- New resource `materialize_connection_mysql` #480
- New resource `materialize_source_mysql` #486
- New resource `materialize_connection_aws` #492
- Add region prefix hint message #488
- Acceptance tests for SCIM config resource #483
- Acceptance tests for SCIM config data source #485
- Dependency updates: #484, #494, #495
- Add support for top level PrivateLink connections to Kafka #471
- New resource: `materialize_system_parameter` #464
- New data source: `materialize_system_parameter` #464
- Add the new `cc` cluster sizes #467
- Fix the SSO configuration SP Entity ID definition #456
- Fix the SSO configuration SP Entity ID definition #456
- New resource: `materialize_scim_config` #449
- Drop `SIZE` support for sources and sinks #438
- Make `cluster_name` parameter required for `materialized_view` and `index` resources #435
- Include `create_sql` for `view` and `materialized_view` #436
- New resources: #442
  - `materialize_sso_config`: Manages SSO configuration details
  - `materialize_sso_default_roles`: Manages SSO default roles
  - `materialize_sso_domain`: Manages SSO domains
  - `materialize_sso_group_mappings`: Manages SSO group mappings
- New data sources:
  - `materialize_scim_configs`: Fetches SCIM configuration details
  - `materialize_scim_groups`: Fetches SCIM group details
  - `materialize_sso_config`: Fetches SSO configuration details
- Add tests for `INCLUDE KEY AS` for Kafka sources #439
- Mark comments as public preview #440
- Dependabot updates: #441, #443
- Introduced a unified interface for managing both global and regional resources.
- Implemented single authentication using an app password for all operations.
- Added dynamic client allocation for managing different resource types.
- Enhanced provider configuration with parameters for default settings and optional endpoint overrides.
- New resources:
  - App passwords: `materialize_app_password`.
  - User management: `materialize_user`.
- Added data sources for fetching region details (`materialize_region`).
- Implemented support for establishing SQL connections across multiple regions.
- Introduced a new `region` parameter in all resource and data source configurations. This allows users to specify the region for resource creation and data retrieval.
- Provider Configuration Changes:
  - Deprecated the `host`, `port`, and `user` parameters in the provider configuration. These details are now derived from the app password.
  - Retained only the `password` definition in the provider configuration. This password is used to fetch all necessary connection information.
- New `region` Configuration:
  - Introduced a new `default_region` parameter in the provider configuration. This allows users to specify the default region for resource creation.
  - The `default_region` parameter can be overridden in specific resource configurations if a particular resource needs to be created in a non-default region.

    ```hcl
    provider "materialize" {
      password       = var.materialize_app_password
      default_region = "aws/us-east-1"
    }

    resource "materialize_cluster" "cluster" {
      name   = "cluster"
      region = "aws/us-west-2"
    }
    ```
- Mock Services for Testing:
- Added a new mocks directory, which includes mock services for the Cloud API and the FrontEgg API.
- These mocks are intended for local testing and CI, facilitating development and testing without the need for a live backend.
- Before upgrading to `v0.5.0`, users should ensure that they have upgraded to `v0.4.x`, which introduced the Terraform state migration necessary for `v0.5.0`. After upgrading to `v0.4.x`, users should run `terraform plan` to ensure that the state migration has completed successfully.
- Users upgrading to `v0.5.0` should update their provider configurations to remove the `host`, `port`, and `user` parameters and ensure that the `password` parameter is set with the app password.
- For managing resources across multiple regions, users should specify the `default_region` parameter in their provider configuration or override it in specific resource blocks as needed using the `region` parameter.
- Rename "default" cluster to "quickstart" as part of a change on the Materialize side #423
- Add `COMPRESSION TYPE` option to `materialize_sink_kafka` resource #414
- Fix Kafka offset acceptance tests #418
- Allow Avro comments (`avro_doc_type` and `avro_doc_column`) for resource `materialize_sink_kafka` #373
- Include additional acceptance tests for datasources #410
- Improved ID structuring in the Terraform state file with region-prefixed IDs, enhancing state management and enabling new features like managing cloud resources (#400, #401, #402, and #406)
- Add `ssh_tunnel` as a broker-level attribute for `materialize_connection_kafka`. `ssh_tunnel` can be applied as a top-level attribute (the default for all brokers) or at the individual broker level #366
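A hedged sketch of a per-broker SSH tunnel; the nested `ssh_tunnel` block attributes are assumed to mirror the existing top-level tunnel configuration, and the tunnel name refers to an assumed existing SSH connection:

```hcl
resource "materialize_connection_kafka" "example" {
  name = "kafka_connection"

  kafka_broker {
    broker = "b-1.example-cluster.kafka.us-east-1.amazonaws.com:9092"

    # Tunnel applied only to this broker (assumed block structure).
    ssh_tunnel {
      name = "ssh_connection"
    }
  }

  kafka_broker {
    broker = "b-2.example-cluster.kafka.us-east-1.amazonaws.com:9092"
    # No tunnel here; a top-level ssh_tunnel would apply to all brokers instead.
  }
}
```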
- Allow `PUBLIC` as `grantee` for default grant resources #397
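A hedged sketch of a default grant to the `PUBLIC` pseudo-role; the resource name and attributes follow the provider's default-privilege resources and should be read as assumptions:

```hcl
resource "materialize_table_grant_default_privilege" "public_select" {
  grantee_name     = "PUBLIC"     # the pseudo-role now allowed as grantee
  privilege        = "SELECT"
  target_role_name = "developers" # assumed existing role
}
```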
- Add `default` to columns when defining a `materialize_table` #374; see the sketch after this list
- Add `expose_progress` to `materialize_source_load_generator` #374
- Support row type in `materialize_type` #374
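A minimal sketch of a column `default` in `materialize_table`; aside from `default`, the column block attributes follow the resource's existing schema, and the default expression shown is illustrative:

```hcl
resource "materialize_table" "example" {
  name = "example_table"

  column {
    name = "id"
    type = "int"
  }

  column {
    name    = "status"
    type    = "text"
    default = "'active'" # SQL expression used as the column default
  }
}
```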
- Fix `expose_progress` in `materialize_source_postgres` and `materialize_source_kafka` #374
- Fix `start_offset` in `materialize_source_kafka` #374
- Allow `replication_factor` of 0 for `materialize_cluster` #390
- Set `replication_factor` as computed in `materialize_cluster` #374
- Remove `session_variables` from `materialize_role` #374
- Fix default grant read #381
- Add `security_protocol` to `materialize_connection_kafka` #365
- Handle `user` values that contain special characters, without requiring manual URL escaping (e.g., escaping `[email protected]` as `you%40corp.com`) #372
- Load generator source `TPCH` requires `ALL TABLES` #377
- Improve grant reads #378
- Add `key_not_enforced` to `materialize_sink_kafka` #361
- Fix a bug where topics were defined after keys in `materialize_sink_kafka` create statements #358
- Correct `ForceNew` for column attributes in `materialize_table` #363
- Update go.mod version to `1.20` #369
- Previously, blocks within resources that included optional `schema_name` and `database_name` attributes would inherit the top-level attributes of the resource if set. So in the following example:

  ```hcl
  resource "materialize_source_postgres" "example_source_postgres" {
    name          = "source_postgres"
    schema_name   = "my_schema"
    database_name = "my_database"

    postgres_connection {
      name = "postgres_connection"
    }
  }
  ```

  The Postgres connection would have the schema name `my_schema` and database name `my_database`. Now, if `schema_name` or `database_name` are not set, they will use the same defaults as top-level attributes (`public` for schema and `materialize` for database) #353
- Include detail and hint messages for SQL errors #354
- Support `ASSERT NOT NULL` for materialized view resource #341
- Update testing plugin #345
- Update header attributes for `materialize_source_webhook`: adds `include_header`, and `include_headers` is now a complex type rather than a boolean #346
- Provider configuration parameters are renamed so that they are consistent across all components #339:
  - The configuration variable `username` is changed to `user`
  - The environment variable `MZ_PW` is changed to `MZ_PASSWORD`
- Fix `grantRead` failures if the underlying object that the grant is on has been dropped #338
- Prevent force new for comments on cluster replicas, indexes and roles #333
- Mask the local sizes for cluster replicas used by Docker #355
- Support for `COMMENTS` on resources #324
- Add support for format `JSON` in Kafka source #305
- Remove `Table` attributes for load gen source #303
- Fix `ALL ROLES` for default grants #300
- Support `FOR SCHEMAS` for Postgres source #262
- Remove unnecessary default privilege attributes #294
- New resource `materialize_source_webhook` for webhook sources #271
- Support `disk` attribute for clusters and replicas #279
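A minimal sketch of the `disk` attribute on a cluster replica; aside from `disk`, the attributes shown follow the resource's existing schema, and the size value is illustrative:

```hcl
resource "materialize_cluster_replica" "example" {
  name         = "r1"
  cluster_name = "example_cluster"
  size         = "medium"
  disk         = true # enable disk for this replica
}
```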
- Include missing attributes for managed cluster data sources #282
- Support key rotation for SSH tunnels #278
- Support for `ADD|DROP` tables with Postgres sources #265
- Additional attributes for managed clusters #275
- Consistent documentation for common attributes #276
- Correct replica sizes >xlarge #268
- Include `subsource` as computed attribute for sources #263
- Remove managed clusters testing flag #261
- Remove `ownership` for cluster replica resource #259
- Require `target_role_name` for all default privilege resources #260
- Require `col_expr` for index resources #220
- Fix removing grants outside of Terraform state #245
- Support `INCLUDE KEY AS <name>` for Kafka sources #250
- RBAC Refactor #234
- Include `WITH (VALIDATE = false)` for testing #236
- Fix identifier quoting #239
- Qualify role name in grant resources #235
- Revised RBAC resources #218. A full overview of the Terraform RBAC resources can be found in `rbac.md`
- Support Managed Clusters #216
- Support `FORMAT JSON` for sources #227
- Support `EXPOSE PROGRESS` for Kafka and Postgres sources #213
- Rollback resource creation if ownership query fails #221
- Table context read includes column attributes #215
- As part of #218, the grant resources introduced in `0.0.9` have been renamed from `materialize_grant_{object}` to `materialize_{object}_grant`
- Resource type `grants` (#191, #205, #209)
- Enable resource and data source `roles` #206
- Add attribute `ownership_role` to existing resources (#208, #211)
- Fixes for resource updates (included as part of acceptance test coverage)
- Correct schema index read #202
- Attributes missing force new (#188, #189)
- Include `application_name` in connection string #184
- Include datasource `materialize_egress_ips`
- Remove improper validation for cluster replica availability zones
- Include `3xsmall` as a valid size
- Update index queries to use `mz_objects`
- Include `cluster_name` as a read parameter for the materialized view query
- Include SSH keys in SSH connection resource
- Cleanup `resources` functions
- Fix slice params
- Adds `principal` property to the AWS PrivateLink connection resource
- Remove unnecessary type property
- Dependabot updates
- Fixes to datasources and added coverage to integration tests
- Fixes to resources' `UpdateContext` functions and added coverage to unit tests
- Change the Go import path
Initial release.