Feature/dockerfile #156
Conversation
JaCoCo agent module code coverage report - spark:2 - scala 2.12.18
JaCoCo server module code coverage report - scala 2.13.11
@@ -23,9 +23,26 @@ on:
     types: [ opened, synchronize, reopened ]

 jobs:
-  test:
+  build-agent-and-model:
I really like the way you split it. However, I read in https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions#jobs that
"by default, jobs have no dependencies and run in parallel with each other"
and I'm wondering how isolated it will be - i.e. if at the same time the runner ran the build for Scala 2.11 for the agent and also 2.13 for the server, would there be a potential clash?
https://github.com/orgs/community/discussions/26769#discussioncomment-3253321
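For what it's worth, each job in a workflow runs on its own fresh runner VM, so the agent and server builds should not step on each other even when they run at the same time; and if serialization were ever needed, `needs:` can express it. A minimal sketch (job names, matrix values, and build commands are illustrative, not the actual workflow):

```yaml
jobs:
  build-agent:
    runs-on: ubuntu-latest          # fresh VM for this job
    strategy:
      matrix:
        scala: [ 2.11.12, 2.12.18 ]
    steps:
      - uses: actions/checkout@v4
      - run: sbt ++${{ matrix.scala }} agent/assembly   # hypothetical command

  build-server:
    runs-on: ubuntu-latest          # separate VM, runs in parallel by default
    # needs: build-agent            # uncomment to force ordering instead
    steps:
      - uses: actions/checkout@v4
      - run: sbt server/assembly                        # hypothetical command
```

Since each job gets its own VM, any caching (e.g. `actions/cache`) is keyed explicitly and shouldn't clash between jobs either.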
My assumption is that this is the same runs-on: ubuntu-latest, but it will probably be a different instance of that OS. (But... I remember that there is some magic caching potentially happening... we'll see :D )
Seems to work fine :).
.github/workflows/docker_image.yml
Outdated
docker run -d -p 8080:8080 absaoss/atum-service:latest
docker container ls
sleep 15 # Wait for the service to start
curl -Lf http://127.0.0.1:8080/docs
I guess that if cURL fails, this step fails?
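Yes - a step fails when its command exits non-zero, and `curl -f` (`--fail`) turns HTTP 4xx/5xx responses into exit code 22, while connection failures produce other non-zero codes (e.g. 7). A small sketch of that mechanic, with `simulated_curl` standing in for the real call against a dead service:

```shell
#!/bin/sh
# A CI step's fate is decided by the exit status of its commands:
# GitHub Actions runs `run:` blocks with errexit-style shell options,
# so any non-zero status fails the step.

simulated_curl() {
  # stand-in for `curl -Lf http://127.0.0.1:8080/docs` when nothing listens;
  # curl exit code 7 means "failed to connect to host"
  return 7
}

if simulated_curl; then
  echo "service healthy"
else
  echo "healthcheck failed with exit code $?"
fi
```

Running this prints `healthcheck failed with exit code 7`; in a workflow the same non-zero status would mark the step red.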
I am going to remove this file entirely for now. It turns out that you can't establish a connection between a container started in one of the steps of your pipeline and the Postgres service. Basically, we would first have to deploy the image somewhere and then fetch it from the registry in the pipeline in order to start it alongside the Postgres service in the same network.
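For reference, the root of the problem is that `services:` containers are attached to a Docker network created for the job, while a container launched manually with `docker run` in a step lands on the default bridge network. Once the image is published, it could itself be declared as a second service and share that network. A sketch assuming the image already exists in a registry (names and credentials are illustrative):

```yaml
jobs:
  integration-test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: changeit     # illustrative only
        ports:
          - 5432:5432
      atum-service:
        image: absaoss/atum-service:latest  # must already be in the registry
        ports:
          - 8080:8080
    steps:
      # services start before the steps; from the runner they are reachable
      # via 127.0.0.1 and the mapped ports
      - run: curl -Lf http://127.0.0.1:8080/docs
```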
# log filter
filter {
  # rootLevel sets the minimum level of log messages that will be displayed
  rootLevel = INFO
Is this not used at all? I made the assumption that it's loaded thanks to this:
private val configProvider: ConfigProvider = TypesafeConfigProvider.fromResourcePath()
I have replaced this logging config section with logback.xml, as I noticed the mappings were not working, so we could only configure the log level of our own classes. SLF4J is now used with the logback logging backend. We can talk more about it on a call if you want.
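For context, a minimal logback.xml of the kind described might look like this (the pattern and logger names are illustrative, not necessarily what the PR ships):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- per-package override, e.g. quieten a chatty third-party library -->
  <logger name="org.http4s" level="WARN"/>

  <!-- equivalent of the old `rootLevel = INFO`, now covering all classes -->
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```

Because SLF4J bridges third-party loggers as well, the `<logger>` mappings here apply to library classes too, not just our own - which is the limitation the old config had.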
server/src/main/scala/za/co/absa/atum/server/model/PlayJsonImplicits.scala
Outdated
@@ -32,4 +33,7 @@ object PartitioningForDB {

     PartitioningForDB(keys = allKeys, keysToValuesMap = mapOfKeysAndValues)
   }

+  implicit val writes: Writes[PartitioningForDB] = Json.writes
Hmm, potentially we might need reads as well, relatively soon?
We will always need both reads and writes for types used by endpoints. Here we need only writes, as we don't receive this type via a REST call but rather construct it ourselves.
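Agreed. For the record, the mirror-image declaration would be symmetric if the type ever started arriving in a request body (a sketch assuming the Play JSON dependency, not compilable standalone):

```scala
import play.api.libs.json.{Json, Reads, Writes}

object PartitioningForDB {
  // outbound only: we construct the value ourselves and serialize it
  implicit val writes: Writes[PartitioningForDB] = Json.writes[PartitioningForDB]

  // would be needed the moment we accept this type via a REST call
  implicit val reads: Reads[PartitioningForDB] = Json.reads[PartitioningForDB]
}
```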
LGTM, really good piece of work!! Reviewed, pulled, built, consulted, tested together, ... I think it can go :)
Introduced a new Dockerfile for the Atum server. Upgraded to Java 11. The Http4s server can now handle both HTTP and HTTPS communication, as an SSL context was added to the application. Fixed problems with logging by introducing SLF4J with the logback logging backend.
Closes #151
Closes #154