Vitality Getting Started
This page provides a comprehensive guide to setting up, installing, and working on the Vitality project. Vitality is a centralized telemetry system designed to integrate multiple tools, enabling better debugging, performance tracking, and user experience improvements. Follow the instructions below to get started.
- Install PostgreSQL for your OS.
- Install pgAdmin 4 for your OS.
- Install Node.js using nvm (minimum version 22.11.0).
- Install pnpm (minimum version 9.15.4):
npm i -g pnpm
- Open your terminal and type `psql`.
- Use `\du` to list existing roles.
- If `your_name` does not exist, create a role using the following command:
CREATE ROLE "your_name" LOGIN PASSWORD 'your_password';
Example:
CREATE ROLE "john_dupon" LOGIN PASSWORD 'awesome_password';
🔴 Note: The `postgres` role should exist by default as a superuser.
- Open pgAdmin 4 and create a new server connection.
- Use the following details:
  - Name: v6y_database
  - Hostname: localhost
If you get `bash: psql: command not found`, fix it by adding PostgreSQL to your system's PATH:
export PATH=/path/to/PostgreSQL/bin:$PATH
git clone https://github.com/ekino/v6y.git
cd v6y
pnpm install
This installs dependencies for all monorepo modules.
- In the following directories: `v6y`, `v6y-libs/core-logic`, `front`, `front-bo`, `bff`, and `bfb-*`, create an `.env` file according to the `env-template` file content.
- Refer to GitLab Personal Access Tokens and GitHub Personal Access Tokens for generating tokens.
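🔵 The Git integration code shown later in this guide reads `GITHUB_PRIVATE_TOKEN` and `GITLAB_PRIVATE_TOKEN` from the environment, so make sure the generated tokens end up in the matching entries of your `.env` files (check each `env-template` for the exact variable names expected).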
- Initialize the database by running the following command from the root folder:
pnpm run init-db
The Frontend is responsible for displaying the user interface of the application, while the Backend for Frontend (BFF) acts as an intermediary layer between the frontend and backend systems. It handles data aggregation and communication with the backend services, ensuring that the frontend receives the required data in an optimized format.
To start these components:
- Start the Backend for Frontend:
cd v6y-apps/bff
pnpm start:dev
- Start the Frontend:
cd v6y-apps/front
pnpm start:dev
🔵 GraphQL Playground: Access the playground at http://localhost:4001/v6y/graphql for testing queries and mutations.
The Frontend Back Office (BO) is designed for administrative tasks, providing tools for managing application configurations, user accounts, and other backend settings. Like the frontend, it communicates with the Backend for Frontend (BFF) for optimized data handling.
To start these components:
- Start the Backend for Frontend:
cd v6y-apps/bff
pnpm start:dev
- Start the Frontend Back Office:
cd v6y-apps/front-bo
pnpm start:dev
🔵 GraphQL Playground: Access the playground at http://localhost:4001/v6y/graphql for testing queries and mutations.
The BFB Main Analyzer retrieves the list of configured applications from the database. Each application contains a Git repository URL (GitHub/GitLab) and a production URL.
- Main Analyzer Workflow:
  - The main analyzer checks out the repository as a ZIP file.
  - The ZIP file is extracted to: `v6y-apps/code-analysis-workspace`
  - Once the source code is checked out, further analysis does not require the main analyzer.
  - Alternatively, contributors can manually download and extract the ZIP file into `code-analysis-workspace` and directly start the static analyzer (see the sketch after this list).
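For the manual alternative above, a minimal sketch of downloading and extracting a repository ZIP into the workspace might look like the following (the `adm-zip` dependency and the `my-org`/`my-repo`/`main` names are illustrative assumptions, not part of the Vitality codebase):
// Hedged sketch: manually fetch a GitHub repository ZIP and extract it into the
// workspace consumed by the static analyzer. adm-zip is an assumed dependency;
// 'my-org', 'my-repo', and 'main' are placeholders.
import AdmZip from 'adm-zip';

const downloadRepositoryZip = async () => {
    const zipUrl = 'https://github.com/my-org/my-repo/archive/refs/heads/main.zip';
    const response = await fetch(zipUrl);
    const zipBuffer = Buffer.from(await response.arrayBuffer());

    // Overwrite any previous checkout in the analysis workspace.
    new AdmZip(zipBuffer).extractAllTo('v6y-apps/code-analysis-workspace', true);
};

downloadRepositoryZip().catch(console.error);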
- Starting Static Analysis:
  - The main analyzer triggers the static analyzer (if required by the application type).
  - If a new analyzer needs to be attached, modify `ApplicationManager.buildApplicationFrontendByBranch`:
try {
    await fetch(staticAuditorApiPath as string, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ applicationId, workspaceFolder }),
    });
} catch (error) {
    AppLogger.info(
        `[ApplicationManager - buildApplicationFrontendByBranch - staticAuditor] error: ${error}`,
    );
}

try {
    await fetch(yourNewStaticAuditorUrl as string, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ applicationId, workspaceFolder }),
    });
} catch (error) {
    AppLogger.info(
        `[ApplicationManager - buildApplicationFrontendByBranch - staticAuditor] error: ${error}`,
    );
}
  - Each analyzer must have its own try-catch to prevent a failure in one from blocking the others.
- Dynamic Analysis Requires the Main Analyzer:
  - Dynamic analyzers run on production URLs and require real-time data.
  - To attach a new dynamic analyzer, update `ApplicationManager.buildDynamicReports`:
const buildDynamicReports = async ({ application }: BuildApplicationParams) => {
    AppLogger.info('[ApplicationManager - buildDynamicReports] application: ', application?._id);

    if (!application) {
        return false;
    }

    try {
        await fetch(dynamicAuditorApiPath as string, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ applicationId: application?._id, workspaceFolder: null }),
        });
    } catch (error) {
        AppLogger.info(
            `[ApplicationManager - buildDynamicReports - dynamicAuditor] error: ${error}`,
        );
    }
};
  - Each analyzer must have its own try-catch to prevent a failure in one from blocking the others.
- Automated Keyword and Evolution Management:
  - The system automatically updates keywords and evolutions based on audit results.
  - No manual changes are required to insert keywords or track evolutions.
- Ensuring Each Analyzer Runs in Isolation:
  - Each analyzer should run inside a worker process to prevent blocking the main thread:
await forkWorker('./src/workers/LighthouseAnalysisWorker.ts', workerConfig);
await forkWorker('./src/workers/CodeQualityAnalysisWorker.ts', workerConfig);
await forkWorker('./src/workers/DependenciesAnalysisWorker.ts', workerConfig);
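🔵 `forkWorker` is a helper from the Vitality codebase. As a rough, non-authoritative sketch of the idea, such a helper can be built on Node's `child_process.fork` (the `workerConfig` shape below is an assumption):
// Illustrative sketch only: the real forkWorker implementation may differ.
import { fork } from 'node:child_process';

type WorkerConfig = Record<string, unknown>;

const forkWorkerSketch = (workerPath: string, workerConfig: WorkerConfig): Promise<void> =>
    new Promise((resolve, reject) => {
        // Running each analyzer in its own process keeps a crash or a long,
        // CPU-bound audit from blocking the main analyzer.
        const worker = fork(workerPath, [JSON.stringify(workerConfig)]);

        worker.on('error', reject);
        worker.on('exit', (code) =>
            code === 0 ? resolve() : reject(new Error(`${workerPath} exited with code ${code}`)),
        );
    });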
🔵 It's also possible to start any task or script from the main package.json:
...
"scripts": {
"start:frontend": "nx run @v6y/front:start",
"start:frontend:bo": "nx run @v6y/front-bo:start",
"start:bff": "nx run @v6y/bff:start",
"start:bfb:main": "nx run @v6y/bfb-main-analyzer:start",
"start:bfb:static": "nx run @v6y/bfb-static-code-auditor:start",
"start:bfb:dynamic": "nx run @v6y/bfb-url-dynamic-auditor:start",
"start:bfb:devops": "nx run @v6y/bfb-devops-auditor:start",
"start:all": "nx run-many --target=start --all",
"stop:all": "nx run-many --target=stop --all",
"build:tsc": "nx run-many --target=build:tsc --all",
"build": "nx run-many --target=build --all",
"lint": "nx run-many --target=lint --all",
"lint:fix": "nx run-many --target=lint:fix --all --verbose",
"format": "nx run-many --target=format --all",
"verify:code:duplication": "jscpd --config .jscpd.json",
"ts-coverage": "typescript-coverage-report",
"test": "nx run-many --target=test --all",
"init-db": "nx run-many --target=init-db --all",
"nx:analyze:graph": "nx graph",
"nx:analyze:graph:affected": "nx graph --affected",
"nx:clear:cache": "nx reset",
"prepare": "husky"
},
...
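🔵 For example, running `pnpm run start:all` from the root folder starts every application through Nx, and `pnpm run build` builds all workspace modules.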
The initial database data is critical for the application to function correctly as it provides the foundational data structures and default configurations required for core features. For example, it may include:
- Default Roles and Permissions: Ensuring proper access control mechanisms are in place.
- Configuration Settings: Predefined settings to bootstrap the application.
- Demo or Sample Data: Allows you to test and verify features during development.
- Import the tar file into your PostgreSQL database.
- To use the Front, Front-BO, or query the BFF, create a superadmin account in the database. Use the following SQL command to insert the account:
INSERT INTO accounts (username, email, password, role, created_at, updated_at, applications)
VALUES (
    'superadmin',
    '[email protected]',
    '$2a$10$fSDUAlp4s8gJNc7HtZdMdeevQHAyRgCy6knbL1QQz3pHstXSbWm0W',
    'SUPERADMIN',
    NOW(),
    NOW(),
    ARRAY[]::integer[]
);
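The password value in this statement is a bcrypt hash matching the `superadmin` password used below. If you want to seed a different password, you can generate a compatible hash yourself; a minimal sketch using the `bcryptjs` package (an assumed utility, not necessarily what Vitality itself uses) would be:
// Hedged sketch: generate a bcrypt hash for a custom superadmin password.
import bcrypt from 'bcryptjs';

const passwordHash = bcrypt.hashSync('your_password', 10);

// Paste the printed value into the password column of the INSERT statement above.
console.log(passwordHash);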
- Once the superadmin account is created, log in using:
- Email: [email protected]
- Password: superadmin
🔵 Note: The login process will generate an authentication token, which must be included in every request sent to the BFF.
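As a hedged illustration, a programmatic request to the BFF GraphQL endpoint could attach the token roughly as follows (the `Authorization: Bearer` header format is an assumption; check the BFF code for the exact contract):
// Illustrative sketch: calling the BFF GraphQL endpoint with the auth token.
const queryBff = async (token: string) => {
    const response = await fetch('http://localhost:4001/v6y/graphql', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            Authorization: `Bearer ${token}`, // assumed header format
        },
        // __typename is a spec-defined GraphQL field, so this works as a simple smoke test.
        body: JSON.stringify({ query: '{ __typename }' }),
    });

    return response.json();
};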
- To create another user account, log in to Front-BO and create additional user accounts with the appropriate roles and privileges.
- Modify `ThemeLoader.ts` (see the example after this list).
export const ThemeTypes = {
ADMIN_DEFAULT: 'admin-default',
APP_DEFAULT: 'app-default',
// Add here your new theme value
};
export const ThemeModes = {
LIGHT: 'light',
DARK: 'dark',
};
/**
* Load theme based on the theme type
* @param theme
*/
export const loadTheme = ({ theme }: ThemeProps) => {
if (theme === ThemeTypes.ADMIN_DEFAULT) {
return AdminTheme;
}
if (theme === ThemeTypes.APP_DEFAULT) {
return AppTheme;
}
// add here your theme case
return {};
};
- Add the necessary variants (e.g., admin, app).
- All theme-specific changes should reside within ui-kit and ui-guide.
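As a sketch of the `ThemeLoader.ts` change described above, adding a hypothetical `PARTNER_DEFAULT` theme (the name and the `PartnerTheme` object are placeholders, and imports are omitted as in the snippet above) would look like this:
// Hypothetical example: PARTNER_DEFAULT and PartnerTheme are illustrative names only.
export const ThemeTypes = {
    ADMIN_DEFAULT: 'admin-default',
    APP_DEFAULT: 'app-default',
    PARTNER_DEFAULT: 'partner-default', // new theme value
};

export const loadTheme = ({ theme }: ThemeProps) => {
    if (theme === ThemeTypes.ADMIN_DEFAULT) {
        return AdminTheme;
    }
    if (theme === ThemeTypes.APP_DEFAULT) {
        return AppTheme;
    }
    // new theme case
    if (theme === ThemeTypes.PARTNER_DEFAULT) {
        return PartnerTheme;
    }
    return {};
};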
- Vitality seamlessly integrates with GitHub and GitLab repositories to fetch repository details, file contents, deployments, and merge requests.
- This interaction is fully centralized in `RepositoryApi.ts`, ensuring consistent API calls across all Vitality applications.
/**
* Builds the configuration for the Github API.
* @param organization
* @constructor
*/
const GithubConfig = (organization: string): GithubConfigType => ({
baseURL: 'https://api.github.com',
api: '',
urls: {
fileContentUrl: (repoName: string, fileName: string) =>
`https://api.github.com/repos/${organization}/${repoName}/contents/${fileName}`,
repositoryDetailsUrl: (repoName: string) =>
`https://api.github.com/repos/${organization}/${repoName}`,
},
headers: {
Authorization: `Bearer ${process.env.GITHUB_PRIVATE_TOKEN}`,
Accept: 'application/vnd.github+json',
'Content-Type': 'application/json',
'User-Agent': 'V6Y',
},
});
/**
* Builds the configuration for the Gitlab API.
* @param organization
* @constructor
*/
const GitlabConfig = (organization: string | null): GitlabConfigType => {
const baseURL = organization ? `https://gitlab.${organization}.com` : 'https://gitlab.com';
return {
baseURL,
api: 'api/v4',
urls: {
repositoryDetailsUrl: (repoName: string) =>
`${baseURL}/api/v4/projects?search=${repoName}`,
fileContentUrl: (repoName: string, fileName: string) =>
`${baseURL}/api/v4/projects?search=${repoName}/${fileName}`,
repositoryDeploymentsUrl: (repoId: string) =>
`${baseURL}/api/v4/projects/${repoId}/deployments`,
repositoryMergeRequestsUrl: (repoId: string) =>
`${baseURL}/api/v4/projects/${repoId}/merge_requests`,
},
headers: {
'PRIVATE-TOKEN': process.env.GITLAB_PRIVATE_TOKEN || '',
'Content-Type': 'application/json',
},
};
};
/**
* Builds the query options for the API.
* @param organization
* @param type
*/
const buildQueryOptions = ({
organization,
type = 'gitlab',
}: BuildQueryOptions): GithubConfigType | GitlabConfigType =>
type === 'gitlab' ? GitlabConfig(organization!) : GithubConfig(organization!);
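A hedged usage sketch of these builders (the organization and repository names are placeholders, and the exact call sites inside `RepositoryApi.ts` may differ):
// Illustrative usage only: 'my-org' and 'my-repo' are placeholder values.
const fetchRepositoryDetails = async () => {
    const config = buildQueryOptions({ organization: 'my-org', type: 'github' });

    const response = await fetch(config.urls.repositoryDetailsUrl('my-repo'), {
        headers: config.headers,
    });

    return response.json();
};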
- All requests to Git providers are centralized inside the `RepositoryApi.ts` file.
- If any incompatibility between GitHub and GitLab is detected, it should be handled inside `RepositoryApi.ts`. All Vitality apps should go through this file to make requests to any Git repository.
This ensures:
- A consistent API interface for all Vitality applications.
- Seamless Git provider switching without modifying application logic.
- Centralized maintenance, reducing the risk of API inconsistencies.
All Vitality applications must use `RepositoryApi.ts` for making any request to a Git repository.
Vitality uses DataDog to collect monitoring events. However, you can easily integrate your own monitoring platform by following these steps:
- Create a function named `fetch[your-monitoring-platform]Events()`, similar to the existing `fetchDataDogEvents()`, in `MonitoringApi.ts`.
- Convert the fetched data into `MonitoringEvent` as defined in `MonitoringType.ts`. This conversion function should be placed in `MonitoringUtils.ts`.
- Call the fetch and conversion functions within `getMonitoringEvents()` in `MonitoringApi.ts`.
As a result, your `getMonitoringEvents()` function should look like this:
const getMonitoringEvents = async ({ application, dateStartStr, dateEndStr }: GetEventsOptions) => {
try {
AppLogger.info(
`[EventApi - getEvents] Fetching events for application: ${application._id}`,
);
if (!application || !application.configuration?.dataDog) {
AppLogger.error(
`[EventApi - getEvents] Application or DataDog configuration is missing`,
);
return [];
}
const dateStartTimeStamp = formatStringToTimeStamp(dateStartStr, 'ms');
const dateEndTimeStamp = formatStringToTimeStamp(dateEndStr, 'ms');
const dataDogEvents = await fetchDataDogEvents({
dataDogConfig: application.configuration.dataDog,
dateStartTimeStamp,
dateEndTimeStamp,
});
const convertedEvents = convertDataDogEventsToMonitoringEvents({
dataDogEvents,
dateStartTimeStamp,
dateEndTimeStamp,
});
AppLogger.info(
`[EventApi - getEvents] Events fetched successfully: ${convertedEvents.length}`,
);
return convertedEvents;
} catch (error) {
AppLogger.error(
`[EventApi - getEvents] An exception occurred while fetching events: ${error}`,
);
return [];
}
};
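As a sketch of steps 1 and 2 above, a hypothetical platform (here called "Acme"; every name, URL, and field in this snippet is an illustrative assumption, not part of the Vitality codebase) could be wired in like this:
// Hypothetical example only: illustrates the fetch + convert pattern described above.
interface AcmeEvent {
    title: string;
    timestamp: number;
}

const fetchAcmeEvents = async ({
    apiKey,
    dateStartTimeStamp,
    dateEndTimeStamp,
}: {
    apiKey: string;
    dateStartTimeStamp: number;
    dateEndTimeStamp: number;
}): Promise<AcmeEvent[]> => {
    const response = await fetch(
        `https://api.acme.example/events?from=${dateStartTimeStamp}&to=${dateEndTimeStamp}`,
        { headers: { Authorization: `Bearer ${apiKey}` } },
    );

    return response.json();
};

// Conversion towards the MonitoringEvent shape; check MonitoringType.ts for the
// exact fields required, this mapping is only an assumption.
const convertAcmeEventsToMonitoringEvents = (acmeEvents: AcmeEvent[]) =>
    acmeEvents.map((event) => ({
        title: event.title,
        timestamp: event.timestamp,
    }));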
If your monitoring platform requires API keys or additional configurations, make sure to:
- Store them in the database under the `configuration` column.
- Update the schema in `ApplicationModel.ts`.
- Check GitHub Issues for `good first issue` or `help wanted` tags.
- Follow the Contribution Guide.
This project is licensed under the MIT License. See the LICENSE file for details.
For further assistance, contact our support team or open an issue on GitHub.