Releases: gofr-dev/gofr
v1.24.0
Release v1.24.0
✨ Features:
- Cassandra Tracing & Context Support:
We've added context support and tracing for Cassandra operations, improving flexibility and observability. The following methods are now available (a usage sketch follows the deprecation note below):
Single Operations:
- QueryWithCtx: Executes queries with context, binding results to the specified destination.
- ExecWithCtx: Executes non-query operations with context.
- ExecCASWithCtx: Executes lightweight (CAS) transactions with context.
- NewBatchWithCtx: Initializes a new batch operation with context.
Batch Operations:
- BatchQueryWithCtx: Adds queries to batch operations with context.
- ExecuteBatchWithCtx: Executes batch operations with context.
- ExecuteBatchCASWithCtx: Executes batch operations with context and returns the result.
Note: The following methods in Cassandra have been deprecated:

```go
type Cassandra interface {
    Query(dest interface{}, stmt string, values ...any) error
    Exec(stmt string, values ...any) error
    ExecCAS(dest any, stmt string, values ...any) (bool, error)
    BatchQuery(stmt string, values ...any) error
    NewBatch(name string, batchType int) error

    CassandraBatch
}

type CassandraBatch interface {
    BatchQuery(name, stmt string, values ...any)
    ExecuteBatch(name string) error
    ExecuteBatchCAS(name string, dest ...any) (bool, error)
}
```
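A minimal usage sketch of the new context-aware methods inside a handler, assuming the Cassandra client is available on the context as `c.Cassandra` and using an illustrative `users` table:

```go
type user struct {
    ID   int    `json:"id"`
    Name string `json:"name"`
}

func GetUsers(c *gofr.Context) (interface{}, error) {
    var users []user

    // gofr.Context satisfies context.Context, so it can be passed directly.
    err := c.Cassandra.QueryWithCtx(c, &users, "SELECT id, name FROM users")
    if err != nil {
        return nil, err
    }

    return users, nil
}
```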
- JWT Claims Retrieval:
OAuth-enabled applications can now retrieve JWT claims directly within handlers. Here's an example:

```go
func HelloHandler(c *gofr.Context) (interface{}, error) {
    // Retrieve the JWT claim from the context
    claimData := c.Context.Value(middleware.JWTClaim)

    // Assert that the claimData is of type jwt.MapClaims
    claims, ok := claimData.(jwt.MapClaims)
    if !ok {
        return nil, fmt.Errorf("invalid claim data type")
    }

    // Return the claims as a response
    return claims, nil
}
```
🛠️ Fixes:
- Redis Panic Handling:
Resolved an issue where calling Redis.Ping() without an active connection caused the application to panic. This is now handled gracefully.
- Docker Example Enhancement:
The http-server example has been enhanced to include Prometheus and Grafana containers in its Docker setup, allowing users to fully explore GoFr's observability features.
v1.23.0
Release v1.23.0
✨ Features:
- Tracing support added for MongoDB database:
Added tracing capabilities for MongoDB database interactions, extending built-in tracing support across various MongoDB methods.
- Support for binding encoded forms:
Added functionality for binding multipart-form data and URL-encoded form data.
- You can use the Bind method to map form fields to struct fields by tagging them appropriately, as shown in the sketch below.
- For more details, visit the documentation.
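A minimal sketch of form binding, assuming form struct tags are used to map the incoming fields (the field and route names are illustrative):

```go
type registration struct {
    Name  string `form:"name"`
    Email string `form:"email"`
}

func RegisterHandler(c *gofr.Context) (interface{}, error) {
    var data registration

    // Bind maps multipart-form or URL-encoded form fields onto the struct.
    if err := c.Bind(&data); err != nil {
        return nil, err
    }

    return data, nil
}
```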
🛠️ Fixes:
- Resolved nil correlationID due to uninitialized exporter:
Addressed an issue introduced in release v1.22.0, where the trace exporter and provider were not initialized when no configurations were specified. The trace provider is now initialized by default, regardless of the provided configuration.
v1.22.1
Release v1.22.1
✨ Fixes
- Fix clickhouse Import:
Importing the clickhouse package was failing in version 1.22.0 because the otel tracer package was present only as an indirect dependency.
v1.22.0
Release v1.22.0
✨ Features
- Support for tracing in clickhouse:
Clickhouse traces are now added and sent along with the respective request traces.
- Support for sampling traces:
Traces can now be sampled based on the env config TRACER_RATIO. It refers to the proportion of traces that are exported through sampling and ranges between 0 and 1. By default, this ratio is set to 1, meaning all traces are exported. A config sketch is shown below.
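A minimal sketch of the env config (assuming the standard GoFr configs/.env file); this would export roughly half of the traces:

```
# configs/.env
TRACER_RATIO=0.5
```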
- Support Azure Eventhub as an external pub-sub datasource:
Eventhub can be used in the same way messages are published and subscribed to with KAFKA, MQTT and Google Pub/Sub.
- To inject Eventhub, import it using the following command:

```shell
go get gofr.dev/pkg/gofr/datasources/pubsub/eventhub
```

- Set up Eventhub by calling the AddPubSub method of gofr:

```go
app.AddPubSub(eventhub.New(eventhub.Config{
    ConnectionString:          "",
    ContainerConnectionString: "",
    StorageServiceURL:         "",
    StorageContainerName:      "",
    EventhubName:              "",
    ConsumerGroup:             "",
}))
```

Refer to the documentation to know how to get these values. A subscribe sketch is shown below.
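A minimal sketch of consuming messages once Eventhub is registered, assuming it is subscribed to like the other pub-sub sources via app.Subscribe (the topic name is illustrative):

```go
app.Subscribe("order-events", func(c *gofr.Context) error {
    var payload map[string]any

    // Bind the incoming message body into a map.
    if err := c.Bind(&payload); err != nil {
        return err
    }

    // ... process the event ...

    return nil
})
```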
- Support to enable HTTPS in the HTTP server:
You can now secure your servers with SSL/TLS certificates by adding the certificates through the following configs: CERT_FILE and KEY_FILE. A config sketch is shown below.
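A minimal sketch of the env config (assuming the standard GoFr configs/.env file; the certificate and key paths are placeholders):

```
# configs/.env
CERT_FILE=/path/to/server.crt
KEY_FILE=/path/to/server.key
```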
✨ Fixes
- Fix SQLite logs:
Empty strings were being logged because SQLite requires different connection configuration parameters than the other SQL datasources; this has been fixed.
v1.21.0
Release v1.21.0
✨ Features
- Support for DGraph:
Dgraph can be added using the AddDgraph method on gofrApp.
The following methods are supported:
```go
// Dgraph defines the methods for interacting with a Dgraph database.
type Dgraph interface {
    // Query executes a read-only query in the Dgraph database and returns the result.
    Query(ctx context.Context, query string) (interface{}, error)

    // QueryWithVars executes a read-only query with variables in the Dgraph database.
    QueryWithVars(ctx context.Context, query string, vars map[string]string) (interface{}, error)

    // Mutate executes a write operation (mutation) in the Dgraph database and returns the result.
    Mutate(ctx context.Context, mu interface{}) (interface{}, error)

    // Alter applies schema or other changes to the Dgraph database.
    Alter(ctx context.Context, op interface{}) error

    // NewTxn creates a new transaction (read-write) for interacting with the Dgraph database.
    NewTxn() interface{}

    // NewReadOnlyTxn creates a new read-only transaction for querying the Dgraph database.
    NewReadOnlyTxn() interface{}

    // HealthChecker checks the health of the Dgraph instance.
    HealthChecker
}
```
To use Dgraph in your GoFr application, follow the steps given below:

Step 1: Install the Dgraph datasource package.

```shell
go get gofr.dev/pkg/gofr/datasource/dgraph
```

Step 2: Register Dgraph with the app.

```go
app.AddDgraph(dgraph.New(dgraph.Config{
    Host: "localhost",
    Port: "8080",
}))
```

GoFr supports both queries and mutations in Dgraph. To know more: Read the Docs. A query sketch is shown below.
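A minimal handler sketch running a read-only query, assuming the client is exposed on the context as c.DGraph (the field name, query, and schema below are illustrative):

```go
func AllNames(c *gofr.Context) (interface{}, error) {
    // Illustrative DQL query; adjust to your own schema.
    const q = `{ names(func: has(name)) { uid name } }`

    resp, err := c.DGraph.Query(c, q)
    if err != nil {
        return nil, err
    }

    return resp, nil
}
```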
🛠 Enhancements
- Migrations in Cassandra:
Users can now add migrations while using Cassandra as the datasource. This enhancement assumes that the user has already created the KEYSPACE in Cassandra. A KEYSPACE in Cassandra is a container for tables that defines data replication settings across the cluster. Visit the Docs to know more.
```go
type Cassandra interface {
    Exec(query string, args ...interface{}) error
    NewBatch(name string, batchType int) error
    BatchQuery(name, stmt string, values ...any) error
    ExecuteBatch(name string) error
    HealthCheck(ctx context.Context) (any, error)
}
```
To achieve atomicity during migrations, users can leverage batch operations using the NewBatch, BatchQuery, and ExecuteBatch methods. These methods allow multiple queries to be executed as a single atomic operation.
When using batch operations, choose batchType LoggedBatch (0) for atomicity or UnloggedBatch (1) for improved performance where atomicity isn't required. This approach provides a way to maintain data consistency during complex migrations; a migration sketch is shown below.
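A sketch of such a migration, assuming GoFr's standard migration layout with a Cassandra field on migration.Datasource (the table, batch name, and data below are illustrative):

```go
package migrations

import "gofr.dev/pkg/gofr/migration"

func populateUsers() migration.Migrate {
    return migration.Migrate{
        UP: func(d migration.Datasource) error {
            const batch = "userBatch"

            // LoggedBatch (0) keeps the statements atomic.
            if err := d.Cassandra.NewBatch(batch, 0); err != nil {
                return err
            }

            if err := d.Cassandra.BatchQuery(batch,
                "INSERT INTO users (id, name) VALUES (?, ?)", 1, "alice"); err != nil {
                return err
            }

            if err := d.Cassandra.BatchQuery(batch,
                "INSERT INTO users (id, name) VALUES (?, ?)", 2, "bob"); err != nil {
                return err
            }

            // Execute all queued statements as a single atomic operation.
            return d.Cassandra.ExecuteBatch(batch)
        },
    }
}
```

The migration can then be registered with app.Migrate like any other GoFr migration.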
- Added mocks for Metrics:
MockContainer can be used to set expectations for metrics in the application while writing tests.
Usage:

```go
// GoFr's mockContainer
_, mock := NewMockContainer(t)

// Set mock expectations using the mocks from NewMockContainer
mock.Metrics.EXPECT().IncrementCounter(context.Background(), "name")

// Call to your function where metrics has to be mocked
// ...
```
v1.20.0
Release v1.20.0
✨ Features
- Support for Solr:
Solr can now be used as a datasource. To add Solr, use the AddSolr(cfg solr.Config) method of gofrApp.
Refer to the documentation for detailed info.
Supported functionalities are listed below, followed by a short usage sketch:

```go
Search(ctx context.Context, collection string, params map[string]any) (any, error)
Create(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error)
Update(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error)
Delete(ctx context.Context, collection string, document *bytes.Buffer, params map[string]any) (any, error)
Retrieve(ctx context.Context, collection string, params map[string]any) (any, error)
ListFields(ctx context.Context, collection string, params map[string]any) (any, error)
AddField(ctx context.Context, collection string, document *bytes.Buffer) (any, error)
UpdateField(ctx context.Context, collection string, document *bytes.Buffer) (any, error)
DeleteField(ctx context.Context, collection string, document *bytes.Buffer) (any, error)
```
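A minimal handler sketch issuing a search, assuming the client is exposed on the context as c.Solr (the collection name and query parameters are illustrative):

```go
func SearchProducts(c *gofr.Context) (interface{}, error) {
    // Illustrative collection name and Solr query parameters.
    params := map[string]any{"q": "type:product", "rows": 10}

    resp, err := c.Solr.Search(c, "products", params)
    if err != nil {
        return nil, err
    }

    return resp, nil
}
```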
🛠 Enhancements
- Added mocks for HTTP Service:
Previously, mocks to test GoFr's HTTP client had to be generated manually. Now, mocks for the HTTP service have been added to GoFr's MockContainer.
Usage:

```go
// register HTTP services to be mocked
httpservices := []string{"cat-facts", "cat-facts1", "cat-facts2"}

// pass the httpservices in NewMockContainer
_, mock := NewMockContainer(t, WithMockHTTPService(httpservices...))

// Set mock expectations using the mocks from NewMockContainer
mock.HTTPService.EXPECT().Get(context.Background(), "fact", map[string]interface{}{
    "max_length": 20,
}).Return(result, nil)

// Call to your function where HTTPService has to be mocked
// ...
```
v1.19.1
Release v1.19.1
🛠 Enhancements
- Support for S3 operations:
FileStore can now be initialised as S3 with the AddFileStore method.
Since S3 is an external datasource, it can be imported by:

```shell
go get gofr.dev/pkg/gofr/datasource/file/s3
```

Example:

```go
app.AddFileStore(s3.New(&s3.Config{
    EndPoint:        "http://localhost:4566",
    BucketName:      "gofr-bucket-2",
    Region:          "us-east-1",
    AccessKeyID:     "test",
    SecretAccessKey: "test",
}))
```

Supported functionalities are listed below, followed by a short usage sketch:

```go
Create(name string) (File, error)
Mkdir(name string, perm os.FileMode) error
MkdirAll(path string, perm os.FileMode) error
Open(name string) (File, error)
OpenFile(name string, flag int, perm os.FileMode) (File, error)
Remove(name string) error
RemoveAll(path string) error
Rename(oldname, newname string) error
ReadDir(dir string) ([]FileInfo, error)
Stat(name string) (FileInfo, error)
Getwd() (string, error)
```
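A minimal handler sketch writing a file to the configured store, assuming the file store is exposed on the context as c.File and the returned File supports Write and Close (the file name and contents are illustrative):

```go
func SaveReport(c *gofr.Context) (interface{}, error) {
    // Create (or overwrite) a file in the configured bucket.
    f, err := c.File.Create("reports/daily.txt")
    if err != nil {
        return nil, err
    }
    defer f.Close()

    if _, err := f.Write([]byte("report contents")); err != nil {
        return nil, err
    }

    return "report saved", nil
}
```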
🐞 Fixes
- Resolved SQL mocks:
Previously, the mock was not able to mock the Query, QueryRow, Select, Dialect, and HealthCheck methods. Hence, the mockgen-generated SQL mocks in the mock container have been replaced with the go-sqlmock package.
v1.19.0
Release v1.19.0
✨ Features
- Support for seconds in cron format:
Cron job schedules can now be more precise with an optional additional field for seconds.
The format can now be either 5-part or 6-part, denoting second (optional), minute, hour, day_of_month, month, day_of_week.
Example:

```go
// Cron job to run every 10 seconds
app.AddCronJob("*/10 * * * * *", "counter", count)

// Cron job to run every 5 minutes
app.AddCronJob("*/5 * * * *", "counter", count)
```
- Support for SFTP operations:
FileStore can now be initialised as SFTP with the AddFileStore method.
Since SFTP is an external datasource, it can be imported by:

```shell
go get gofr.dev/pkg/gofr/datasource/file/sftp
```

Example:

```go
app.AddFileStore(sftp.New(&sftp.Config{
    Host:     "127.0.0.1",
    User:     "user",
    Password: "password",
    Port:     22,
}))
```

Supported functionalities are:

```go
Create(name string) (File, error)
Mkdir(name string, perm os.FileMode) error
MkdirAll(path string, perm os.FileMode) error
Open(name string) (File, error)
OpenFile(name string, flag int, perm os.FileMode) (File, error)
Remove(name string) error
RemoveAll(path string) error
Rename(oldname, newname string) error
ReadDir(dir string) ([]FileInfo, error)
Stat(name string) (FileInfo, error)
Getwd() (string, error)
```
🛠 Enhancements
- Response with Partial Content status code:
If the handler returns both data and an error, the status code will now be Partial Content (206); see the sketch below.
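A minimal illustrative handler returning both a partial result and an error (the data and error message are placeholders):

```go
func ListItems(c *gofr.Context) (interface{}, error) {
    items := []string{"item-1", "item-2"}

    // Returning non-nil data together with a non-nil error now results
    // in a 206 Partial Content response.
    return items, errors.New("could not fetch the remaining items")
}
```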
- Enhance Observability for FTP:
Log formatting and structuring have been improved.
A new histogram metric app_ftp_stats has been introduced to record execution duration, with type and status as labels.
- Logger mock methods for testing purposes:
To help test the logging methods, mocks are now generated with mockgen instead of being created manually for every datasource.
🐞 Fixes
- Resolved permission issues in directories:
While creating a new directory, the permissions were missing for the directory. Fixed that by providing ModePerm (777) permissions.
v1.18.0
Release v1.18.0
✨ Features
- SQL Tags in AddRESTHandlers:
The AddRESTHandlers function now supports the following SQL tags for enhanced data integrity and database handling:
- auto_increment: When this tag is applied to a struct field, any provided ID value will be ignored. Instead, the ID returned by the database after insertion will be used.
- not_null: This tag enforces a non-null constraint at the service level, ensuring that no nil value can be sent for the specified field. In case a nil value is sent, an error will be returned.
Example:

```go
type user struct {
    ID         int    `json:"id" sql:"auto_increment"`
    Name       string `json:"name" sql:"not_null"`
    Age        int    `json:"age"`
    IsEmployed bool   `json:"isEmployed"`
}
```
- Added support for directory operations in FileSystem:
Supported functionalities are listed below, followed by a short usage sketch.
- ChDir(dirname string) error: changes the current directory.
- Getwd() (string, error): returns the path of the current directory.
- ReadDir(dir string) ([]FileInfo, error): returns a list of files/directories present in the directory.
- Stat(name string) (FileInfo, error): returns the file/directory information in the directory.
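A minimal handler sketch using the new directory operations, assuming the file system is exposed on the context as c.File and FileInfo exposes Name() as in os.FileInfo:

```go
func ListCurrentDir(c *gofr.Context) (interface{}, error) {
    cwd, err := c.File.Getwd()
    if err != nil {
        return nil, err
    }

    entries, err := c.File.ReadDir(".")
    if err != nil {
        return nil, err
    }

    names := make([]string, 0, len(entries))
    for _, e := range entries {
        names = append(names, e.Name())
    }

    return map[string]any{"cwd": cwd, "entries": names}, nil
}
```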
🛠 Enhancements
- Error logs for invalid configs:
Added validations for the REQUEST_TIMEOUT and REMOTE_LOG_FETCH_INTERVAL configs; an error is now logged if they are invalid.
- Error logs for internal server errors:
Previously, if an internal server error occurred, there was only a log with the status code and correlationID.
An error log with the correlationID and error message has now been added.
- FileSystem mock methods for testing purposes:
To help test the FileSystem methods, mocks have now been added to the mock container struct, which can be generated from the NewMockContainer method.
🐞 Fixes
- Resolved application status in case of migration failure:
If any error occurs while running the migrations, the application will now gracefully shut down.
- Resolved response for error case:
For an error case where the response consists of an error message only and has no data to return, the data field was still present in the output as null.
This has been removed, so only the error struct will be returned with the respective status code.
v1.17.0
Release v1.17.0
✨ Features
- Added support for FTP as an external datasource:
FTP can be added using the AddFTP(fs file.FileSystemProvider) method on gofrApp.
Supported functionalities are:

```go
Create(name string) (File, error)
Mkdir(name string, perm os.FileMode) error
MkdirAll(path string, perm os.FileMode) error
Open(name string) (File, error)
OpenFile(name string, flag int, perm os.FileMode) (File, error)
Remove(name string) error
RemoveAll(path string) error
Rename(oldName, newName string) error
```
- Cassandra now supports Batching:
Added Batch functionality with the newly introduced methods:

```go
NewBatch(batchType int) error
BatchQuery(stmt string, values ...interface{})
ExecuteBatch() error
```
- Automated injection of gofr.Context in the gRPC server during registration of the gRPC service:
gRPC can now inject the gofr container into the Server struct, to access the logger, datasources, and other functionalities provided by gofr.
Refer to the example for detailed info.
🛠 Enhancements
- Messages can now be written to WebSocket without returning:
Added the WriteMessageToSocket(data any) error method on gofr.Context to write a message; a sketch is shown below.
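A minimal sketch inside a WebSocket handler (the route and messages are illustrative; the registration via app.WebSocket and the handling of the returned value are assumptions about GoFr's WebSocket support):

```go
app.WebSocket("/ws", func(c *gofr.Context) (interface{}, error) {
    // Push an intermediate message to the socket without returning.
    if err := c.WriteMessageToSocket("processing started"); err != nil {
        return nil, err
    }

    // ... do some work ...

    // Return the final response as with a normal handler.
    return "done", nil
})
```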
🐞 Fixes
- Resolved panic for EnableBasicAuth:
If an odd number of arguments (user, password) was passed to the EnableBasicAuth method, the app panicked. This has been fixed; users and passwords can be passed to the method as comma-separated pairs like:

```go
EnableBasicAuth(user1, pass1, user2, pass2)
```

- Resolved authentication for EnableBasicAuth:
Even when the credentials were correct, the app was returning a 401 Unauthorized status instead of the expected 200.
- Fixed unstructured log in Mongo:
Debug query logs were not properly formatted for Mongo; the formatting has been fixed.
The message field in the logs was of string type; it has been updated to an object.