Releases: nMoncho/helenus

Helenus v0.6.0

27 Dec 09:06

Release Description

Release v0.6.0 includes:

  • Make DSE TypeCodecs implicitly available
  • Allow field/column mapping in RowMapper
  • Enable ColumnMapper derivation if a TypeCodec is available implicitly

DSE TypeCodecs implicitly available

DSE has geometry TypeCodecs, such as Point and LineString. This release makes them implicitly available.
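For example, with the implicit codecs in scope, a geometry value can be bound like any other supported type. A minimal sketch, assuming an implicit CqlSession and an illustrative locations table:

import java.util.UUID

import com.datastax.dse.driver.api.core.data.geometry.Point
import com.datastax.oss.driver.api.core.CqlSession
import net.nmoncho.helenus._

implicit val session: CqlSession = ??? // your session

// `locations` is an illustrative table with a geometry column
val insertLocation = "INSERT INTO locations(id, position) VALUES (?, ?)".toCQL
  .prepare[UUID, Point]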

Allow field/column mapping in RowMapper

Sometimes users need to map a field name to a column name that doesn't simply follow the camel-case-to-snake-case convention. This release introduces NamedRowMapper as a means to do this:

case class RenamedRow(id: UUID, leeftijd: Int, naam: String)

object RenamedRow {
  implicit val rowMapper: RowMapper[RenamedRow] =
    NamedRowMapper[RenamedRow]("leeftijd" -> "age", "naam" -> "name")
}

This mapping targets a table that looks like:

CREATE TABLE IF NOT EXISTS some_table(
   id     UUID,
   age    INT,
   name   TEXT,
   PRIMARY KEY (id, age)
);

Only columns that have to be renamed need to be specified.
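As an illustrative usage sketch (assuming an implicit CqlSession in scope), the mapper defined above is then picked up implicitly when projecting rows:

val byId = "SELECT * FROM some_table WHERE id = ?".toCQL
  .prepare[UUID]
  .as[RenamedRow]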

Enable ColumnMapper derivation if a TypeCodec is available implicitly

This release allows deriving a ColumnMapper if a TypeCodec is available implicitly. This was disallowed before due to a design bug. Unfortunately, we had to remove the implicit auto-derivation for UDT codecs.
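A sketch of what this enables, with illustrative names and an assumed codec:

import java.util.UUID

import com.datastax.oss.driver.api.core.type.codec.TypeCodec
import net.nmoncho.helenus._

// given a custom type with an implicitly available codec...
case class Tag(value: String)
implicit val tagCodec: TypeCodec[Tag] = ??? // e.g. built with Codec.mappingCodec

// ...deriving a RowMapper for a case class that uses it now works, since
// the ColumnMapper for the `tag` field is derived from the implicit codec
case class Article(id: UUID, tag: Tag)
implicit val articleMapper: RowMapper[Article] = RowMapper[Article]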

Helenus v0.5.0

20 Dec 15:25

Release Description

Release v0.5.0 includes:

  • MappingCodec helper method
  • Use RowMapper directly on Rows
  • Allow custom column mapping when using a RowMapper

MappingCodec helper method

DSE's MappingCodec allows users to map elements from an unsupported type into a supported one. For example, since Cassandra doesn't support Enums natively, we can map their names into TEXT columns. Normally we would have to subclass this abstract class, so we decided to include a helper method that takes advantage of the implicit codecs:

import net.nmoncho.helenus._

val codec: TypeCodec[MyEnum] =
    Codec.mappingCodec[String, MyEnum](e => e.toString, str => MyEnum.withName(str))

Use RowMapper directly on Rows

The extension methods that interact with a RowMapper work on top of ScalaPreparedStatement, ResultSet, AsyncResultSet, or ReactiveResultSet. We decided to add a similar extension method on top of Row, so users can leverage a RowMapper when they already have a collection of rows, without having to change how they query.
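A minimal sketch, assuming the Row extension mirrors the as[T] method used on statements:

import com.datastax.oss.driver.api.core.cql.Row
import net.nmoncho.helenus._

case class Person(name: String, age: Int)
implicit val personMapper: RowMapper[Person] = RowMapper[Person]

// each already-fetched Row is mapped without issuing a new query
def mapRows(rows: Seq[Row]): Seq[Person] =
  rows.map(_.as[Person])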

Custom column mapping when using a RowMapper

Some fields cannot be mapped one-to-one to a single column, for example when a complex field is not a UDT. We've introduced the concept of a ColumnMapper to give users more control over how fields are mapped (we had to rename the previous column mapper to ColumnNamingScheme). The way this works is as follows:

case class Address(street: String, number: Int, zipCode: String)
case class Person(firstName: String, lastName: String, address: Address)

implicit val namingScheme: ColumnNamingScheme = SnakeCase
implicit val addressColumnMapper: ColumnMapper[Address] = new ColumnMapper[Address] {
  override def apply(columnName: String, row: Row): Address = Address(
    row.getString("addr_street"),
    row.getInt("addr_number"),
    row.getString("addr_zip_code")
  )
}
implicit val personRowMapper: RowMapper[Person] = RowMapper[Person]

This would work for a table with the following structure:

CREATE TABLE people(
  first_name         TEXT,
  last_name          TEXT,
  addr_street        TEXT,
  addr_number        INT,
  addr_zip_code      TEXT,
  PRIMARY KEY (first_name, last_name)
);

Helenus v0.4.0

15 Dec 13:47

Release Description

Release v0.4.0 includes:

  • Akka Streams support
  • Map rows into arbitrary case classes, when executing a sync, async, or reactive query
  • Map rows into arbitrary tuples, when executing an async query

Akka Streams Support

One of the main motivations behind this project is easy integration with other tools that connect to Cassandra. This release allows users to combine Helenus and Akka Streams. By introducing the following operations, you can create Sources, Flows, and Sinks:

  • asReadSource: A ScalaPreparedStatement can be promoted to a Source. By itself it will return a Source[Row, NotUsed], but by using as[T], you can plug in a valid RowMapper.
  • asWriteSink or asWriteFlow: A ScalaPreparedStatement can be promoted to a Sink or to a Flow, respectively.

For example:

import net.nmoncho.helenus._
import net.nmoncho.helenus.akka._

val query: Source[(String, Int, Int), NotUsed] = "SELECT * FROM population_by_country WHERE country = ? AND age >= ?"
   .toCQL
   .prepare[String, Int]
   .as[(String, Int, Int)]
   .asReadSource

val insert: Sink[(String, Int, Int), Future[Done]] = "INSERT INTO population_by_country(country, age, amount) VALUES (?, ?, ?)"
   .toCQL
   .prepare[(String, Int, Int)]
   .asWriteSink

Similar operations are available for batched inserts, and for flows with context.
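As a usage sketch (assuming an implicit ActorSystem is in scope to materialize the streams), the source and sink above run like any other Akka Streams stages:

import scala.concurrent.Future

import akka.Done
import akka.stream.scaladsl.{ Sink, Source }

// stream rows out of Cassandra...
val read: Future[Seq[(String, Int, Int)]] = query.runWith(Sink.seq)

// ...and write records back in (values illustrative)
val written: Future[Done] =
  Source(List(("NL", 18, 42), ("NL", 21, 101))).runWith(insert)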

Map Rows as Arbitrary Case Classes

In the previous release we added the ability to map the outcome of queries, that is rows, into arbitrary tuples. With this release we can map those rows into case classes:

case class Population(country: String, age: Int, amount: Int)
val query = "SELECT * FROM population_by_country WHERE country = ? AND age >= ?"
   .toCQL
   .prepare[String, Int]
   .as[Population]

val rowsAsObjects: List[Population] = query.execute(countryId, age).to(List)

Helenus v0.3.0

22 Oct 13:11

Release Description

Release v0.3.0 includes:

  • Extract query results as Scala Tuples

Query Results

So far we didn't have an idiomatic way of getting query results out of an executed BoundStatement. Now we have two ways:

val query = "SELECT * FROM population_by_country WHERE country = ? AND age >= ?"
   .toCQL
   .prepare[String, Int]

val rowsAsTuples = query(countryId, age)
  .execute()
  .as[(String, Int, Int)]
  .to(List)

Here we're extending Cassandra's PagingIterable to map Rows into tuples. While this could allow for better integration, since BoundStatements are decoupled from their execution and from the PagingIterables they produce, the same result can be achieved in fewer steps, at the cost of that potential integration:

val query = "SELECT * FROM population_by_country WHERE country = ? AND age >= ?"
   .toCQL
   .prepare[String, Int]
   .as[(String, Int, Int)]

val rowsAsTuples = query.execute(countryId, age).to(List)

Notice we are no longer treating query as a function; we hand the query parameters directly to execute.

Full Changelog: v0.2.1...v0.3.0

Helenus v0.2.0

22 Oct 12:56

Release Description

Release v0.2.0 includes:

  • Introduce IterableCodec
  • Remove unnecessary traversal on collection codecs
  • Improve collection codec performance, introducing mutable local state
  • Allow UDTs and Case Classes to have different field order and naming

IterableCodec

Just like Scala's collection hierarchy, we introduce an IterableCodec to share common code among collection codecs such as List, Seq, Set, and Vector.

Unfortunately, Map doesn't share the same superclass; that may be addressed in the future.

Improve Collection Performance

As the micro-benchmarks show, using Scala collections is slower than their Java counterparts, sometimes as much as 2x slower. This release narrows that difference.

UDT and Case Classes

As noted in the previous release, UDTs and case classes had the limitation that they had to have the same field definition order. A valid mapping would be:

// CREATE TYPE ice_cream(name TEXT, numCherries INT, cone BOOLEAN)

@Udt("store", "ice_cream")
case class IceCream(name: String, numCherries: Int, cone: Boolean)

val codec: TypeCodec[IceCream] = Codec.udtOf[IceCream]

This is no longer necessary; we can now define them with a different order or naming scheme:

// CREATE TYPE ice_cream(name TEXT, cone BOOLEAN, num_cherries INT)

@Udt("store", "ice_cream")
case class IceCream(name: String, numCherries: Int, cone: Boolean)

implicit val colMapper: ColumnMapper = SnakeCase
val codec: TypeCodec[IceCream] = Codec.udtFrom[IceCream](session)

This requires a CqlSession as a parameter to calculate the mapping between fields.

Full Changelog: v0.1.1...v0.2.0

Helenus v0.1.0

22 Oct 12:39

Release Description

Initial Helenus release, including:

  • Type Codecs
  • CQL interpolation
  • Micro-benchmarks
  • Docs powered by MDoc

Type Codecs

AnyVals

Avoids any kind of type ascription to Java types, like:

val age = 42
val pstmt = session.prepare("SELECT * FROM people WHERE age > ?")

pstmt.bind(age: java.lang.Integer)
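With the implicit codecs in scope, the ascription is unnecessary. A sketch using the String extension method described later in these notes (assuming an implicit session):

import net.nmoncho.helenus._

// the Scala Int is encoded by Helenus's AnyVal codec, no java.lang.Integer needed
val olderThan = "SELECT * FROM people WHERE age > ?".prepare[Int]

olderThan(42)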

Collections

Doesn't require users to convert back and forth between Java and Scala collections. Where before you would use CollectionConverters like:

import scala.jdk.CollectionConverters._

val pstmt = session.prepare("SELECT cast FROM movies WHERE id = ?")

session
  .execute(pstmt.bind("Batman"))
  .one()
  .getList(0, classOf[Cast])
  .asScala // <-- this converts a java.util.List into a scala.collection.mutable.ListBuffer

Now you get immutable collections by default, without any conversion:

val pstmt = session.prepare("SELECT cast FROM movies WHERE id = ?")

session
  .execute(pstmt.bind("Batman"))
  .one()
  .get(0, TypeCodecs.listOf(classOf[Cast])) // <-- this gives an immutable List

The same goes for Option; there's no need to convert from Java's Optional.
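A sketch, under the assumption that the Option codec is exposed implicitly like the others:

import com.datastax.oss.driver.api.core.cql.Row
import com.datastax.oss.driver.api.core.type.codec.TypeCodec
import net.nmoncho.helenus._

// read a possibly-null column as an Option instead of a Java Optional
def maybeName(row: Row): Option[String] =
  row.get(0, implicitly[TypeCodec[Option[String]]])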

Tuples

Native integration between Scala Tuples and Cassandra Tuples, where before you'd have to define a TupleType:

val tupleType = new DefaultTupleType(
  List(DataTypes.INT, DataTypes.TEXT).asJava,
  attachmentPoint
)
val scalaTuple = (42, "Batman")

val tuple = tupleType
  .newValue()
  .set(0, scalaTuple._1)
  .set(1, scalaTuple._2)

Now you can use Scala tuples as bound parameters and in query projections.
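For instance, a sketch assuming a table with a tuple-typed column (names illustrative, implicit session in scope):

import net.nmoncho.helenus._

// CREATE TABLE heroes(id INT PRIMARY KEY, alias TUPLE<INT, TEXT>)
val insertHero = "INSERT INTO heroes(id, alias) VALUES (?, ?)".prepare[Int, (Int, String)]

insertHero(1, (42, "Batman"))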

UDTs and Case Classes

As with tuples, there's no need to convert back and forth: you can use case classes as UDTs.

For this you only need to annotate the class with the @Udt annotation:

@Udt("tests", "ice_cream")
case class IceCream(name: String, numCherries: Int, cone: Boolean)

Right now case classes are limited to having the same definition order as the CQL type. For example, for IceCream the correct type is: CREATE TYPE ice_cream(name TEXT, numCherries INT, cone BOOLEAN). Names aren't important, just order and type.

Query Cassandra

There are two ways to query Cassandra: with CQL interpolation, and with String extension methods.

CQL Interpolation

Treat interpolated Strings as CQL statements:

def queryMovies(name: String): BoundStatement =
  cql"SELECT * FROM movies WHERE name = ${name}"

We suspect this kind of query can be a bit slow, due to how interpolation is implemented. Measuring and improving this is planned for the future.

String Extension Methods

Prepare regular Strings as CQL statements, and then treat the result as a function that outputs BoundStatements:

val query = "SELECT * FROM movies WHERE name = ?".prepare[String]

query("Batman")
query("Batman Returns")

Micro Benchmarks

Some TypeCodecs have micro-benchmarks to compare our implementation against Datastax's.

What's Changed

  • feature: Add Scala Codecs, CQL interpolation, Microbenchmarks, and MDoc by @nMoncho in #1

Full Changelog: https://github.com/nMoncho/helenus/commits/v0.1.0