Releases: nMoncho/helenus
Helenus v1.8.1
Helenus v1.8.0
Helenus v1.7.0
v1.7.0 (2024-12-08)
Features
- Add Monix support (8fac8 Gustavo De Micheli)
- Add `withOptions` extension method to `Future[ScalaPreparedStatement[In, Out]]` (fa91a Gustavo De Micheli)
- Add `UnifiedUDTCodec` to bring `IdenticalUDTCodec` and `NonIdenticalUDTCodec` together (700fb Gustavo De Micheli)
Monix Support
This version includes integration with Monix, using Observables for data sources and Consumers for data sinks.
As with other integrations, we start from a CQL query:
val query = "SELECT * FROM ice_creams".toCQL.prepareUnit.as[IceCream].asObservable()
val consumer = "INSERT INTO ice_creams(name, numCherries, cone) VALUES(?, ?, ?)".toCQL
.prepare[String, Int, Boolean]
.from[IceCream]
.asConsumer()
Both the sync and async APIs are supported.
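As a hedged sketch of how the two pieces compose (assuming `query` above is a Monix `Observable[IceCream]`, `consumer` a `Consumer[IceCream, Unit]`, and that any implicit `CqlSession` required by `asObservable`/`asConsumer` is in scope), the source can be drained into the sink with Monix's `consumeWith`:

```scala
// Minimal sketch: copy every row produced by the SELECT into the INSERT consumer.
import monix.execution.Scheduler.Implicits.global

val copyIceCreams = query.consumeWith(consumer) // Task[Unit], nothing runs yet
copyIceCreams.runToFuture                       // triggers the stream
```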
Unified UDT Codecs
In previous releases we offered two ways of creating `TypeCodec`s for UDTs: `udtOf` and `udtFrom`. These offered different ways of mapping a case class to a UDT, depending on whether the fields of both were aligned or not. This required knowledge of the implementation on both sides. We always thought this was bad design, but the library evolved that way.
After the Scala 3 release, where we wanted to support the `derives` keyword, we developed a single way to create these `TypeCodec`s. This feature is back-ported to its Scala 2 counterpart.
Users who want a specific implementation are still supported through two new methods that make the implementation explicit. Also, in preparation for Helenus 2.x, we deprecated `udtOf` and `udtFrom`.
To use the new methods:
```scala
case class IceCream(name: String, numCherries: Int, cone: Boolean)

// these codec methods also support overriding "keyspace", "name", and "frozen"
val unified      = Codec.of[IceCream]()
val identical    = Codec.identicalUdtOf[IceCream]()
val nonIdentical = Codec.nonIdenticalUdtCodecOf[IceCream]()
```
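As a hedged usage sketch (assuming the DataStax Java driver 4.x API and that the derived codec extends the driver's `TypeCodec`; the session setup is illustrative), a derived codec can be registered with the session's codec registry so the driver can encode and decode the UDT:

```scala
// Minimal sketch: register the derived codec with an existing session.
// The default registry implements MutableCodecRegistry in driver 4.x.
import com.datastax.oss.driver.api.core.CqlSession
import com.datastax.oss.driver.api.core.`type`.codec.registry.MutableCodecRegistry

val session: CqlSession = CqlSession.builder().build()

session.getContext.getCodecRegistry match {
  case registry: MutableCodecRegistry => registry.register(unified)
  case _ => () // non-mutable registry: codecs must be supplied when the session is built
}
```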
Bug Fixes
- Replace `addOne` on `Factory.Builder` for the `+=` operator, failing on Scala 2.12 (96ef0 Gustavo De Micheli)
- Properly accept `DataType` when using Identical and NonIdentical UDT Codecs (9bb8b Gustavo De Micheli)
Other changes
- Release v1.7.0 (c090f Gustavo De Micheli, 2024-12-08 18:10:17)
- Update logback-classic to 1.5.12 (bf943 Scala Steward, 2024-11-10 17:04:12)
- Update sbt-ci-release to 1.9.0 (65aeb Scala Steward, 2024-11-10 17:03:54)
- Update pekko-stream, pekko-testkit to 1.1.2 (9b8af Scala Steward, 2024-11-10 17:03:37)
- Update mockito-core to 5.14.2 (f3d9a Scala Steward, 2024-11-10 17:03:20)
- Update scala-library, scala-reflect to 2.12.20 (9fa89 Scala Steward, 2024-11-10 17:03:05)
- Update mdoc, sbt-mdoc to 2.6.1 (8c3c0 Scala Steward, 2024-09-30 07:33:59)
- Update scala-library, scala-reflect to 2.13.15 (411ff Scala Steward, 2024-09-30 07:33:39)
- Update sbt-scalafix to 0.13.0 (40ba4 Scala Steward, 2024-09-30 07:33:17)
- Update mockito-core to 5.14.0 (1dafe Scala Steward, 2024-09-30 07:33:04)
- Update pekko-stream, pekko-testkit to 1.1.1 (794c1 Scala Steward, 2024-09-17 09:34:36)
- Update jna to 5.15.0 (59248 Scala Steward, 2024-09-17 09:34:17)
- Update scalacheck to 1.18.1 (8b1eb Scala Steward, 2024-09-17 09:34:02)
- Update mdoc, sbt-mdoc to 2.6.0 (04d06 Scala Steward, 2024-09-17 08:59:47)
- Update pekko-stream, pekko-testkit to 1.1.0 (5bedd Scala Steward, 2024-09-06 07:12:30)
- Update mockito-core to 5.13.0 (760e6 Scala Steward, 2024-08-30 15:45:26)
- Update sbt-mima-plugin to 1.1.4 (5c1b3 Scala Steward, 2024-08-22 07:24:02)
- Update slf4j-api to 2.0.16 (da350 Scala Steward, 2024-08-22 07:23:47)
- Update logback-classic to 1.5.7 (d3e5a Scala Steward, 2024-08-22 07:23:32)
- Update sbt-ci-release to 1.6.1 (380f8 Scala Steward, 2024-08-22 07:23:17)
- Update scalafmt-core to 3.8.3 (15993 Scala Steward, 2024-07-30 07:25:05)
- Update flink-connector-base, flink-core, ... to 1.18.1 (8cc25 Scala Steward, 2024-07-11 09:43:10)
- Update pekko-stream, pekko-testkit to 1.0.3 (a9bd0 Scala Steward, 2024-07-11 09:42:54)
- Update scalatest to 3.2.19 (589ce Scala Steward, 2024-07-11 09:41:28)
- Update mdoc, sbt-mdoc to 2.5.4 (1cc56 Scala Steward, 2024-07-11 09:41:07)
Helenus v1.6.1
Helenus v1.6.0
v1.6.0 (2024-06-11)
Features
Flink Experimental Support
In this release we add experimental support for Apache Flink. You can use Helenus as either a `Source` or a `Sink`.
The way this works is a bit different from Akka/Pekko, or plain Helenus. Since the statements will be run on different nodes, they have to be prepared late. This means we provide a thunk to Flink instead of an already prepared statement.
For example, if we want to define a Source, we can do:
```scala
val query = (session: CqlSession) => "SELECT * FROM hotels".toCQL(session).prepareUnit.as[Hotel].apply()

val input: DataStream[Hotel] = env.fromSource(
  query.asSource(CassandraSource.Config()),
  WatermarkStrategy.noWatermarks(),
  "Cassandra Source"
)
```
Notice that we must provide a thunk of type `CqlSession => ScalaBoundStatement[Out]`. We don't provide a `ScalaPreparedStatement` instead, since query parameters cannot be bound during parallel execution.
We can also write the previous snippet in curried form:
```scala
val input: DataStream[Hotel] = env.fromSource(
  "SELECT * FROM hotels".toCQL(_)
    .prepareUnit
    .as[Hotel]
    .apply()
    .asSource(CassandraSource.Config()),
  WatermarkStrategy.noWatermarks(),
  "Cassandra Source"
)
```
The current implementation is heavily influenced by Flink's Cassandra Connector. After we get more experience and feedback, we'll probably adjust and improve our implementation.
This was implemented in this commit:
- Add Flink Source/Sink for DataStream and DataSet (9dbb3 Gustavo De Micheli)
Ignoring `null` values
By default, Helenus ignores `null` values provided to `ScalaPreparedStatement`s: after a statement is bound, `null` values are simply skipped from the bind parameters.
Nonetheless, sometimes users may want to insert `null` into a row (e.g. when updating a row and setting a column to empty).
For this purpose you can use the `withIgnoreNullFields` method to decide whether `null` fields should be ignored (`true` by default):
"INSERT INTO hotels(id, name, phone, address, pois) VALUES (?, ?, ?, ?, ?)".stripMargin.toCQL
.prepare[String, String, String, Address, Set[String]]
.withIgnoreNullFields(ignore = false)
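As a hedged usage sketch (here `insert` stands for the statement above bound to a value, `someAddress` is an illustrative `Address`, and an implicit `CqlSession` is assumed), with `ignore = false` a `null` parameter is actually written and unsets the column instead of being skipped:

```scala
// Minimal sketch: the null phone is sent to Cassandra, clearing that column.
insert.execute("h-1", "Rotes Hotel", null, someAddress, Set.empty[String])
```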
This was implemented in these two commits:
- Allow `prepareFrom` queries to ignore 'null/None' through options (441a8 Gustavo De Micheli)
- Allow users to choose if 'null/None' fields are ignored or not when binding parameters to a statement (fab1d Gustavo De Micheli)
Add `oneOption` extension method as convenience for queries with a single result
In previous releases we added the `nextOption` method, which returns a `Future[Option[(T, MappedAsyncPagingIterable)]]`. This was done to provide an API similar to Pager's, in which the client code can iterate over results while keeping track of the mutated paging iterable, which changes when fetching the next page.
An example can be seen in the tests:
result <- "SELECT * FROM hotels".toCQLAsync.prepareUnit
.as[Hotel]
.map(_.withPageSize(2))
.executeAsync()
Some((hotelA, iteratorA)) <- result.nextOption()
Some((hotelB, iteratorB)) <- iteratorA.nextOption() // we use 'iteratorA', not 'result'
Some((hotelC, iteratorC)) <- iteratorB.nextOption()
Some((hotelD, iteratorD)) <- iteratorC.nextOption()
Some((hotelE, iteratorE)) <- iteratorD.nextOption()
lastResult <- iteratorE.nextOption()
This is cumbersome when we want only the first result, for example when we use a `LIMIT 1` in the query, which is where `oneOption` becomes useful:
result <- "SELECT * FROM hotels LIMIT 1".toCQLAsync.prepareUnit
.as[Hotel]
.executeAsync()
firstResult = result.oneOption // this should be Some
nextResult = result.oneOption // this will always be None
`oneOption` no longer provides a `Future` but only an `Option`, as the page is already fetched.
This was implemented in this commit:
- Add `oneOption` extension method to MappedAsyncPagingIterable (e7878 Gustavo De Micheli)
Other Features
- Add extension methods to `Future[ScalaBoundStatement[Out]]` like the synchronous counterpart had (f0d44 Gustavo De Micheli)
Other changes
- Update shapeless to 2.3.12 (5e46a Scala Steward, 2024-05-20 08:25:26)
- Update mockito-core to 5.12.0 (f5ff3 Scala Steward, 2024-05-11 17:32:11)
- Update logback-classic to 1.5.6 (546c4 Scala Steward, 2024-05-11 10:10:21)
- Update sbt-scalafix to 0.12.1 (7f359 Scala Steward, 2024-05-11 10:10:09)
- Update scalacheck to 1.18.0 (b4a90 Scala Steward, 2024-05-11 10:09:59)
- Update scalacheck to 1.17.1 (2fe71 Scala Steward, 2024-04-19 13:06:41)
- Update scala-collection-compat to 2.12.0 (725b3 Scala Steward, 2024-04-19 12:54:52)
- Update logback-classic to 1.5.5 (a1805 Scala Steward, 2024-04-14 09:34:43)
- Update slf4j-api to 2.0.13 (43062 Scala Steward, 2024-04-14 09:34:23)
- Update scalafmt-core to 3.8.1 (eaceb Scala Steward, 2024-04-10 12:54:39)
- Update logback-classic to 1.5.4 (0037c Scala Steward, 2024-04-10 12:54:23)
- Update scalafmt-core to 3.8.0 (1a034 Scala Steward, 2024-03-01 16:10:53)
Helenus v1.5.0
v1.5.0 (2024-02-13)
Features
- Allow RowMapper to handle Either fields encoded as different columns. This provides an alternative to the EitherCodec, where values are encoded in a tuple (27219 Gustavo De Micheli)
- Add integration between Mapped Statements and Akka/Pekko Streams (751ae Gustavo De Micheli)
- Add short-hand methods to prepare and execute 'prepareFrom' on Async statements (02cb4 Gustavo De Micheli)
Other changes
- Update mockito-core to 5.10.0 (36cd5 Scala Steward, 2024-02-03 18:42:39)
- Update jna to 5.14.0 (ff63d Scala Steward, 2024-02-03 18:42:08)
- Update slf4j-api to 2.0.11 (0ceb7 Scala Steward, 2024-02-03 18:41:42)
- Update pekko-connectors-cassandra to 1.0.2 (fafc1 Scala Steward, 2024-02-03 18:41:21)
Helenus v1.4.1
Release Description
This release includes:
- Delegate `accepts` to inner codec for `OptionCodec`
Delegate `accepts` to inner codec for `OptionCodec`
An unnecessary warning would be logged if an Option of a UDT was used, as the `OptionCodec` wouldn't delegate the information properly. This release fixes that.
Helenus v1.4.0
Release Description
This release includes:
- Add `Pager` on Interpolated Queries and ScalaBoundStatements
- Provide `PagingState` as Materialized Value
- Add Interpolated Statement Queries to Akka/Pekko Streams
Add `Pager` on Interpolated Queries and ScalaBoundStatements
In this release users can make use of `Pager` from Interpolated Queries and `ScalaBoundStatement`s. In previous releases this feature was limited to `ScalaPreparedStatement`s.
Provide `PagingState` as Materialized Value
In previous releases a `Pager` could be executed reactively, providing a `Publisher[(Pager[Out], Out)]` where the first tuple element could be used to fetch the next page. This value was duplicated for every element of the page.
From this release both Akka and Pekko Streams can get the `PagingState` as the Materialized Value of the stream. This `PagingState` can later be used to create another `Pager` and resume the query execution.
Add Interpolated Statement Queries to Akka/Pekko Streams
This feature has been a long time coming. From this release we can define Interpolated Queries as Akka/Pekko Sources.
Its intended use is to define a query to read data from. This feature doesn't extend nicely to a Write Sink, since bind parameters are bound when the query is constructed, which would only work for one set of values, unlike a `ScalaPreparedStatement`, where bind parameters are bound for each element in the stream going to the Sink.
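A hedged sketch of reading from an interpolated query as a Pekko Source (assuming an implicit `CqlSession` and `ActorSystem` are in scope; `asReadSource` and the shape of the materialized value are illustrative and may differ from the actual API):

```scala
// Minimal sketch: an interpolated query as a Source whose materialized value is the
// PagingState described above; Keep.both keeps it alongside the collected results.
import org.apache.pekko.stream.scaladsl.{ Keep, Sink }

val hotelId = "h-1"
val hotels  = cql"SELECT * FROM hotels WHERE id = $hotelId".as[Hotel].asReadSource()

val (pagingState, results) = hotels.toMat(Sink.seq)(Keep.both).run()
```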
Helenus v1.3.1
Release Description
This release includes:
- Verify `ScalaPreparedStatement` arity
- Iterate future results with `nextOption`
- Allow creating UDT Codecs by defining field order
- Introduce `Pager`
- Introduce a `Mapping` to combine an Adapter and a RowMapper
Verify `ScalaPreparedStatement` arity
Specifying the same arity of bind parameters and function parameters can be error prone. This release introduces a runtime check that will warn users if the arity is different. It will also check that the expected and the provided types match.
Unfortunately there is no way to provide this information at compile time, like Phantom or Quill do. We believe that the database is the source of truth, so things like this will always have to be checked at runtime.
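As a hedged illustration of what the check catches (assuming an implicit `CqlSession` is in scope; the query and table are illustrative):

```scala
// Minimal sketch: two bind markers but only one declared parameter type.
// With this release, Helenus warns at runtime about the arity mismatch.
val mismatched = "SELECT * FROM hotels WHERE id = ? AND name = ?".toCQL.prepare[String]
```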
Iterate future results with `nextOption`
Iterating the results of an asynchronous execution works like pagination. This release fixes an issue that prevented users from requesting the next page after the first.
Allow creating UDT Codecs by defining field order
In previous releases Helenus provided two ways of defining UDT Codecs: `udtOf`, when case class fields were defined in the same order as the CQL type, and `udtFrom`, when this wasn't the case.
`udtFrom` was a bit cumbersome since it required a `CqlSession` to build the UDT mapping. This release allows users to define the order of the fields without requiring a `CqlSession`, using the `udtFromFields` method:
```scala
// For the type:
//   CREATE TYPE IF NOT EXISTS ice_cream (name TEXT, num_cherries INT, cone BOOLEAN)
// The fields `cone` and `numCherries` are swapped with respect to the CQL type
case class IceCream(name: String, cone: Boolean, numCherries: Int)

// The second parameter list defines the order of the CQL fields
val codec: TypeCodec[IceCream] =
  Codec.udtFromFields[IceCream]("keyspace", "ice_cream", frozen = true)(_.name, _.numCherries, _.cone)
```
Due to a macro limitation we cannot provide default arguments, so users must specify the keyspace, the CQL type name, and whether it's frozen. Nonetheless these parameters can be empty strings, and the implementation will choose the correct values from the context.
Introduce `Pager`
This release introduces `Pager`, an abstraction that allows users to paginate over query results. You can read more about it on our wiki page on Pagination.
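As a hedged sketch of what paging looks like (method names such as `pager()` and `execute` are illustrative; the Pagination wiki page documents the actual API, and an implicit `CqlSession` is assumed):

```scala
// Minimal sketch: fetch results page by page, carrying the pager state forward.
val pager = "SELECT * FROM hotels".toCQL.prepareUnit.as[Hotel].pager()

val (page1, hotels1) = pager.execute(pageSize = 10) // first page and its continuation
val (page2, hotels2) = page1.execute(pageSize = 10) // next page, resumed from page1
```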
Introduce a `Mapping`
This release introduces `Mapping`, a way to combine an `Adapter` and a `RowMapper`.
Its intended use is to allow users to have an ORM-like feature where a case class can be used to insert and query values from a table.
This also allows users to overcome the limitation of having statements with up to 22 parameters.
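As a hedged sketch of the ORM-like usage (the derivation call `Mapping[Hotel]()`, the `Hotel` fields, and the execution style are illustrative; `prepareFrom` is the method referenced in the notes above):

```scala
// Minimal sketch: an implicit Mapping lets a case class be bound directly to an insert.
case class Hotel(id: String, name: String, phone: String)

implicit val hotelMapping: Mapping[Hotel] = Mapping[Hotel]()

val insert = "INSERT INTO hotels(id, name, phone) VALUES (?, ?, ?)".toCQL.prepareFrom[Hotel]
insert.execute(Hotel("h-1", "Rotes Hotel", "+49 123 4567"))
```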
Helenus v1.2.2
Release Description
This release includes:
- Optional elements on tuples now return `None` when empty
Optional elements on tuples now return `None` when empty
Just like the previous bugfix, this release takes care of handling `null` values on tuples.