This repository contains the spark_framework package, a collection of utility functions that simplify life for PySpark developers.
Its purposes are:
- to make PySpark code look more pythonic
- to implement many operations more efficiently, e.g. the calculation of medians, quantiles, and similar statistics, which is slow in the default Spark implementation
So, by using spark_framework you will make your code more compact, easier to read, and more robust.
The author implemented the package from scratch three times for different customers, until the last customer gave full permission to release it as open source. The author is currently a data scientist and was previously a big data engineer; during daily data science work, functions were added to the package one by one so they could be reused across many PySpark notebooks.
This is just the initial commit to GitHub; the author plans to extend the demo notebook and add documentation in the near future.