All subdirectories of ./user represent usermode libraries or programs. Each of them provides a Makefile with all and install targets: the former compiles the program using the files currently available in the sysroot, the latter copies the result into the appropriate place in the sysroot.
We currently describe dependencies between these programs in ./user/Makefile, like this:
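(a hypothetical reconstruction with illustrative target names; the order-only prerequisites are the part after the | character)

# Build stdc before prog and ksh, but do NOT rebuild them when stdc
# changes -- that's exactly what order-only prerequisites mean.
prog-install: | stdc-install
ksh-install: | stdc-install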
but this mechanism doesn't work very well. First, in order to prevent recompiling everything all the time, these are order-only dependencies, so changes in one program don't propagate to another. Second, make is incapable of tracking dependencies between entire trees of subdirectories.
Another problem is that we mix everything together in the main sysroot directory: because of that, when a file in sysroot is updated, it's not easy to tell which user programs need to be recompiled (e.g. because libmimiker was modified), since we don't track the origin of files in sysroot. Clearly, make is not capable of managing this sort of dependency: it was designed for micro-managing dependencies between files, not entire projects.
My proposed solution is to prepare a Python script which would manage the process of compiling user programs and gathering their files into an initrd.
All user programs/libraries (that is: subdirectories of ./user/) would also provide a simple dependency description file. For example, rules.json for prog might look like this:
{
  "name": "prog",
  "depends": ["stdc"]
}
The rules file might also contain information about which commands need to be executed to compile and install the program, if they differ from the default make / make install.
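For instance (the build / install key names and the lua package are purely illustrative, not a fixed schema):

{
  "name": "lua",
  "depends": ["stdc"],
  "build": "make generic",
  "install": "make install INSTALL_TOP=/usr"
}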
The sysroot-building Python script would gather all information from the user/* directories, to build a full picture of how programs and 3rd-party projects depend on each other. It would then compile them in the right order.
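Determining the right order is a plain topological sort over the rules files. A minimal sketch, assuming Python 3.9+ (for the stdlib graphlib module):

import glob
import json
from graphlib import TopologicalSorter  # stdlib since Python 3.9

def build_order():
    """Read every rules.json and return package names in an order
    that satisfies all declared dependencies."""
    graph = {}
    for path in glob.glob('user/*/rules.json'):
        with open(path) as f:
            rules = json.load(f)
        # Map each package to its predecessors (dependencies).
        graph[rules['name']] = rules.get('depends', [])
    return list(TopologicalSorter(graph).static_order())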
The Python script would maintain a cache directory with installed files. For example, prog wouldn't get installed to sysroot/ directly, but to sysroot-cache/prog/ instead. This way the sysroot-cache directory effectively contains binary packages in the form of directories.
(!!!) In order to compile user program X which depends on Y and Z, the Python script would create a temporary build-sysroot by copying (merging) the contents of sysroot-cache/Y/ and sysroot-cache/Z/. The build-sysroot directory would be passed as --sysroot for compiling X. The actual compilation of X would use traditional make. Once X is built, the Python script would use fakechroot to have X installed into sysroot-cache/X/. This way X doesn't need to be aware of the entire process, and everything is transparent!
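A rough sketch of that flow, assuming the directory layout above (the SYSROOT variable name is illustrative, and the fakechroot invocation is deliberately left as a placeholder, since keeping the source tree visible inside the faked root takes some care, e.g. via FAKECHROOT_EXCLUDE_PATH):

import os
import shutil
import subprocess
import tempfile

def build_package(name, depends):
    # 1. Assemble a throwaway sysroot from the dependencies' caches.
    build_sysroot = tempfile.mkdtemp(prefix='build-sysroot-')
    for dep in depends:
        shutil.copytree(os.path.join('sysroot-cache', dep),
                        build_sysroot, dirs_exist_ok=True)

    # 2. Traditional make; the Makefile is expected to forward the
    #    merged tree to the compiler as --sysroot.
    subprocess.run(['make', f'SYSROOT={build_sysroot}'],
                   cwd=os.path.join('user', name), check=True)

    # 3. Placeholder: run `make install` under fakechroot here, so that
    #    installs to absolute paths land in sysroot-cache/<name>/ without
    #    the program's Makefile knowing anything about the scheme.
    os.makedirs(os.path.join('sysroot-cache', name), exist_ok=True)

    shutil.rmtree(build_sysroot)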
Naturally, the Python script would also track which projects need recompiling. That becomes simple when we keep all packages as separate directory trees: the script just needs to scan the sysroot-cache/X directory to find the most recently modified file, and recompile only what's necessary, using its dependency information.
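A sketch of that check, under the assumption that file modification times are a good enough staleness signal:

import os

def newest_mtime(root):
    """Most recent modification time of any file under root
    (0 if the tree doesn't exist yet)."""
    latest = 0.0
    for dirpath, _, files in os.walk(root):
        for f in files:
            latest = max(latest, os.path.getmtime(os.path.join(dirpath, f)))
    return latest

def needs_rebuild(name, depends):
    """X is stale if it was never built, its sources changed, or any
    dependency's cached tree is newer than X's cached tree."""
    built = newest_mtime(os.path.join('sysroot-cache', name))
    if built == 0.0:
        return True
    if newest_mtime(os.path.join('user', name)) > built:
        return True
    return any(newest_mtime(os.path.join('sysroot-cache', d)) > built
               for d in depends)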
To prepare the actual initrd archive, all sysroot-cache/*/ directories are merged.
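For example (the cpio newc format is an assumption here; whatever format the kernel's initrd loader expects would go in its place):

import glob
import shutil
import subprocess

def make_initrd(output='initrd.cpio'):
    """Merge every cached package tree and pack the result."""
    shutil.rmtree('initrd-root', ignore_errors=True)
    for pkg in sorted(glob.glob('sysroot-cache/*/')):
        shutil.copytree(pkg, 'initrd-root', dirs_exist_ok=True)
    # GNU cpio reads the (null-delimited) file list from stdin.
    files = subprocess.run(['find', '.', '-print0'], cwd='initrd-root',
                           capture_output=True, check=True).stdout
    with open(output, 'wb') as out:
        subprocess.run(['cpio', '-o', '-0', '--format=newc'],
                       cwd='initrd-root', input=files, stdout=out,
                       check=True)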
In practice, the manager script starts to behave like a minimal package manager, with the sysroot-cache/* directories representing binary packages.
This mechanism exhibits additional useful properties:
- Managing user libraries is free; they don't differ in any way from programs.
- The process is completely transparent, and user programs shouldn't need any adaptation.
- Adding third-party libraries and programs becomes super simple: just drop a submodule or source directory into user, place a rules.json with the dependency description, and you're done. (Because of submodules, we may need to allow placing program.rules.json directly in the ./user directory.)
- Compilation dependencies are airtight: only what is specified as a dependency will be present in the build-sysroot used for compilation, so no unwanted stuff can be pulled in, even by mistake.
- It is very scalable, and will stay simple to maintain even if we use a very large number of user libraries and programs.
(Original comment explaining the first version of this idea is here.)