# TIBCO JasperReports® Server deployment in OpenShift
The TIBCO JasperReports® Server application should be able to run on any OpenShift container platform using the scripts under js-docker.

## Prerequisites
- Docker Engine (19.x+) set up with Docker Compose (3.9+)
- OpenShift container platform
- TIBCO JasperReports® Server
- Keystore
- Git
- Helm 3.5
- Basic knowledge of Docker, K8s, and OpenShift
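The tool versions can be verified quickly from a shell, for example:

```
docker --version   # 19.x or later
helm version       # 3.5
oc version
git --version
```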
Add the required Helm chart repositories:

```
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add elastic https://helm.elastic.co
```
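Then refresh the local chart index so the newly added repositories are picked up:

```
helm repo update
```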
## Parameters

These parameters and their values are the same as the parameters in K8s/jrs/helm/values.yaml.

Parameter | Description | Default Value |
---|---|---|
replicaCount | Number of pods | 1 (has no effect if autoscaling is enabled) |
jrsVersion | TIBCO JasperReports® Server release version | 7.9.0 |
image.tag | Name of the TIBCO JasperReports® Server webapp image tag | TIBCO JasperReports® Server Release Version |
image.name | Name of the TIBCO JasperReports® Server webapp image | null |
image.pullPolicy | Docker image pull policy | IfNotPresent |
image.PullSecrets | Name of the image pull secret | The pull secret must be created manually in the same namespace before use; see Docs |
nameOverride | Override the default helm chart name | jasperserver-pro |
fullnameOverride | Override the default full chart name | null |
secretKeyStoreName | Name of the keystore secret | jasperserver-keystore |
secretLicenseName | Name of the license secret | jasperserver-license |
serviceAccount.enabled | Service account for jasperserver webapp | true |
serviceAccount.annotations | Adds new annotations | null |
serviceAccount.name | Name of the service account | jasperserver-pro |
rbac.create | Creates role and role binding | true |
rbac.name | Name of the jasperserver role and role binding | jasperserver-role |
podAnnotations | Adds pod annotations | null |
securityContext.capabilities.drop | Drops Linux capabilities for the jasperserver webapp | All |
securityContext.runAsNonRoot | Runs the jasperserver webapp as a non-root user | true |
securityContext.runAsUser | User ID to run the jasperserver webapp | 10099 |
buildomatic.enabled | Installs or skips the TIBCO JasperReports® Server repository DB | true |
buildomatic.name | Name of the jasperserver command line tool | jasperserver-buildomatic |
buildomatic.imageTag | Buildomatic image tag | Same as TIBCO JasperReports® Server release version |
buildomatic.imageName | Name of the buildomatic image | null |
buildomatic.pullPolicy | Image pull policy | IfNotPresent |
buildomatic.PullSecrets | Image pull secrets | null |
buildomatic.includeSamples | Installs TIBCO JasperReports® Server samples in TIBCO JasperReports® Server DB | true |
extraEnv.javaopts | Adds all JAVA_OPTS | -Xmx3500M |
extraEnv.normal | Adds all the normal key value pair variables | null |
extraEnv.secrets | Adds all the environment references from secrets or configmaps | null |
extraVolumeMounts | Adds extra volume mounts | null |
extraVolumes | Adds extra volumes | null |
Service.type | TIBCO JasperReports® Server service type | ClusterIP (currently kept as NodePort for internal testing) |
Service.port | Service port | 80 |
healthcheck.enabled | Checks TIBCO JasperReports® Server pod health status | true |
healthcheck.livenessProbe.port | Jasperserver container port | 8080 |
healthcheck.livenessProbe.initialDelaySeconds | Initial delay before the liveness probe starts; the jasperserver webapp pod is restarted if the probe fails | 350 |
healthcheck.livenessProbe.failureThreshold | Threshold for health checks | 10 |
healthcheck.livenessProbe.periodSeconds | Time period to check the health | 10 |
healthcheck.livenessProbe.timeoutSeconds | Timeout | 4 |
healthcheck.readinessProbe.port | Jasperserver container port | 8080 |
healthcheck.readinessProbe.initialDelaySeconds | Initial delay before checking the health checks | 90 |
healthcheck.readinessProbe.failureThreshold | Threshold for health checks | 15 |
healthcheck.readinessProbe.periodSeconds | Time period to check the health checks | 10 |
healthcheck.readinessProbe.timeoutSeconds | Timeout | 4 |
resources.enabled | Enables the minimum and maximum resources used by TIBCO JasperReports® Server | true |
resources.limits.cpu | Maximum CPU | "3" |
resources.limits.memory | Maximum Memory | 7.5Gi |
resources.requests.cpu | Minimum CPU | "2" |
resources.requests.memory | Minimum Memory | 3.5Gi |
jms.enabled | Enables the ActiveMQ cache service | true |
jms.name | Name of the JMS | jasperserver-cache |
jms.serviceName | Name of the JMS Service | jasperserver-cache-service |
jms.imageName | Name of the ActiveMQ image | rangareddyv/activemq-openshift |
jms.imageTag | ActiveMQ image tag | 5.16.2 |
jms.healthcheck.livenessProbe.port | Container port | 61616 |
jms.healthcheck.livenessProbe.initialDelaySeconds | Initial delay | 100 |
jms.healthcheck.livenessProbe.failureThreshold | Threshold for health check | 10 |
jms.healthcheck.livenessProbe.periodSeconds | Time period for health check | 10 |
jms.healthcheck.readinessProbe.port | Container port | 61616 |
jms.healthcheck.readinessProbe.initialDelaySeconds | Initial delay | 10 |
jms.healthcheck.readinessProbe.failureThreshold | Threshold for health check | 15 |
jms.healthcheck.readinessProbe.periodSeconds | Time period for health check | 10 |
jms.securityContext.capabilities.drop | Linux capabilities to drop for the pod | All |
ingress.enabled | Enables ingress to work with multiple pods and session stickiness | false |
ingress.annotations."ingress.kubernetes.io/cookie-persistence" | Ingress annotation for cookie-based session persistence | "JRS_COOKIE" |
ingress.hosts.host | Adds a valid DNS hostname to access TIBCO JasperReports® Server | null |
ingress.tls | Adds TLS secret name to allow secure traffic | null |
autoscaling.enabled | Enables autoscaling for the TIBCO JasperReports® Server application. Note: make sure the metrics server is installed or metrics are enabled | false |
autoscaling.minReplicas | Minimum number of pods maintained by autoscaler | 1 |
autoscaling.maxReplicas | Maximum number of pods maintained by autoscaler | 4 |
autoscaling.targetCPUUtilizationPercentage | CPU utilization percentage at which the application scales up | 50% |
autoscaling.targetMemoryUtilizationPercentage | Memory utilization percentage at which the TIBCO JasperReports® Server application scales up | null |
metrics.enabled | Enables metrics collection and display | false |
kube-prometheus-stack.prometheus-node-exporter.hostRootFsMount | Mounts the host root filesystem in the node exporter | false |
kube-prometheus-stack.grafana.service.type | Grafana service type | NodePort |
logging.enabled | Enables the centralized logging setup | false |
elasticsearch.volumeClaimTemplate.resources.requests.storage | Storage request for the Elasticsearch volume claim | 10Gi |
kibana.service.type | Kibana service type | NodePort |
elasticsearch.replicas | Number of replicas for Elasticsearch | 1 |
tolerations | Adds the tolerations as per K8s standard if needed | null |
affinity | Adds the affinity as per K8s standards if needed | null |
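For example, selected parameters from the table can be overridden at install time with --set; the registry, image name, and values below are illustrative placeholders:

```
helm install jrs jrs/helm --namespace jrs \
  --set replicaCount=2 \
  --set image.name=<your-registry>/jasperserver-webapp \
  --set image.tag=7.9.0 \
  --set extraEnv.javaopts="-Xmx3500M"
```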
## Installation

- Go to js-docker/K8s and update the dependency charts by running helm dependencies update jrs/helm.
- Update the default_master.properties in Docker/jrs/resources/default_properties as needed. Note: To create the DB in the K8s cluster, update the dbhost in Docker/jrs/resources/default_properties/default_master.properties with the name repository-postgresql.<k8s-namespace>.svc.cluster.local.
- Build the Docker images for TIBCO JasperReports® Server (see the Docker TIBCO JasperReports® Server readme) and push them to your Docker registry.
- Generate the keystore and copy it to the K8s/jrs/helm/secrets/keystore folder; see the Keystore Generation section in the TIBCO JasperReports® Server readme.
- Copy the TIBCO JasperReports® Server license to the K8s/jrs/helm/secrets/license folder. (A combined shell sketch of these steps follows this list.)
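A minimal sketch of the command-line steps above, assuming the repository root as the working directory; the keystore file names (.jrsks, .jrsksp) and source paths are illustrative placeholders:

```
# Update the dependency charts
cd js-docker/K8s
helm dependencies update jrs/helm

# Copy the generated keystore and the license into the chart secrets folders
cp /path/to/.jrsks /path/to/.jrsksp jrs/helm/secrets/keystore/
cp /path/to/jasperserver.license jrs/helm/secrets/license/
```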
By default, TIBCO JasperReports® Server installs with the ActiveMQ Docker image. You can disable it by setting the parameter jms.enabled=false.

An external JMS instance can also be used instead of the in-built JMS setup by adding the external JMS URL in jms.jmsBrokerUrl. You can access it over TCP, for instance tcp://<JMS-URL>:61616.
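For example, a hypothetical install that disables the bundled ActiveMQ and points at an external broker might look like this (whether jms.enabled must also be disabled depends on your setup):

```
helm install jrs jrs/helm --namespace jrs \
  --set jms.enabled=false \
  --set jms.jmsBrokerUrl=tcp://<JMS-URL>:61616
```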
To set up the repository DB in the K8s cluster, run the command below. For this, we use the bitnami/postgresql Helm chart; see the official docs to configure the DB in cluster mode.

```
helm install repository bitnami/postgresql --set postgresqlPassword=postgres --namespace jrs --create-namespace
```
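You can then watch the pods come up, for example:

```
oc get pods --namespace jrs
```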
- Check the pod status and make sure the pods are in a running state.
- If the namespace already exists, remove the --create-namespace parameter.
- Go to js-docker/K8s and update jrs/helm/values.yaml; see the Parameters section for more information.
- Set buildomatic.enabled=true for the repository setup. By default, samples are included; if they are not required, set buildomatic.includeSamples=false.
- Run the below command to install TIBCO JasperReports® Server and set up the repository:

```
helm install jrs jrs/helm --namespace jrs --wait --timeout 20m0s
```

Note: If the repository setup fails in the middle, increase the timeout.

If the jrs namespace does not exist yet, add the --create-namespace flag:

```
helm install jrs jrs/helm --namespace jrs --create-namespace
```
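Once the install completes, the release status can be checked with Helm:

```
helm status jrs --namespace jrs
```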
Note:

- If the DB cannot be created using the bitnami/postgresql Helm chart due to persistent volume issues, create a PostgreSQL container from the OpenShift console under the Database section.
- Using an external database is always recommended.
By default, accessing the application from outside the cluster is disabled. To access the application from outside the OpenShift cluster, create an OpenShift Route:
- It can be created directly from the OpenShift console by selecting the jasperserver-pro service, or
- it can be created with the oc command-line tool:

```
oc expose svc/<jasperserver-service-name>
```
- The TIBCO JasperReports® Server application requires stickiness. To enable stickiness on the Route, run the below command with the oc command-line tool:

```
oc annotate route <route-name> router.openshift.io/cookie_name=JRS_COOKIE
```

- For securing the Route, see the official OpenShift docs; a sketch of one option follows.
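As a sketch (the route name, service name, and certificate paths here are placeholders), an edge-terminated TLS route can be created with:

```
oc create route edge jasperserver-secure --service=<jasperserver-service-name> \
  --cert=tls.crt --key=tls.key
```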
## Troubleshooting

- If the repository setup fails due to password authentication, remove the PostgreSQL Helm chart and the persistent volume claim, then re-install the PostgreSQL chart, setting the password with --set postgresqlPassword=postgres (see the sketch below).
- If you encounter a deployment issue due to keystore problems, check the Keystore Generation steps to resolve it.
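A minimal sketch of that recovery sequence, assuming the release name repository and the jrs namespace from above (the PVC label selector is an assumption and may differ across chart versions):

```
helm uninstall repository --namespace jrs
# Remove the persistent volume claim left behind by the chart
oc delete pvc --selector app.kubernetes.io/instance=repository --namespace jrs
helm install repository bitnami/postgresql --set postgresqlPassword=postgres --namespace jrs
```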