Team Unagi's repository for ICFPC 2022
- Takuya Akiba
- Kentaro Imajo
- Hiroaki Iwami
- Yoichi Iwata
- Toshiki Kataoka
- Naohiro Takahashi
We used Rust extensively for writing solvers, common logic, and tools. We also used Go and Python for utilities.
The main part of our approach is dynamic programming. We used heuristic-based swap preprocessing to make the whole problem easier, and applied local optimizations to improve the outputs.
For problems with initial canvas blocks, we simply merged all the preexisting blocks to reset the canvas, again relying on dynamic programming to find the optimal way to merge them.
Since the cost of an operation is inversely proportional to the area of the block it acts on, we restricted all operations to large rectangles whose upper-right corner is (400,400). The problem can then be solved exactly by the following dynamic programming.
dp[lx][ux][ly][uy] := minimum cost (including cleanup) for processing the rectangle [lx,ux)×[ly,uy) using the canvas [lx,400)×[ly,400)
We can compute dp[lx][ux][ly][uy] as the minimum of the following three cases.
- Painting with one color. We used the median color instead of the optimal color for efficiency.
- Cutting at x=v: dp[lx][v][ly][uy] + dp[v][ux][ly][uy] + (cut&merge cost)
- Cutting at y=v: dp[lx][ux][ly][v] + dp[lx][ux][v][uy] + (cut&merge cost)
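The recurrence can be sketched as follows. This is a minimal model, not the solver itself: the cost functions and the toy two-color image are stand-ins we invented for illustration (the real paint cost uses the contest's exact scoring and the similarity penalty of filling the rectangle with its median color).

```rust
use std::collections::HashMap;

type Rect = (usize, usize, usize, usize); // (lx, ux, ly, uy)

// Toy image model (an assumption for this sketch): left half one color,
// right half another, so painting a rectangle that straddles x = 200
// with a single color incurs a mismatch penalty.
fn mismatch(lx: usize, ux: usize, ly: usize, uy: usize) -> f64 {
    if lx >= 200 || ux <= 200 {
        return 0.0;
    }
    let h = (uy - ly) as f64;
    0.05 * h * ((200 - lx).min(ux - 200)) as f64
}

// Placeholder operation costs, inversely proportional to the area of the
// canvas [lx,400) x [ly,400), echoing the contest's cost rule.
fn paint_cost(lx: usize, ux: usize, ly: usize, uy: usize) -> f64 {
    let canvas = ((400 - lx) * (400 - ly)) as f64;
    5.0 * 160000.0 / canvas + mismatch(lx, ux, ly, uy)
}

fn cut_merge_cost(lx: usize, ly: usize) -> f64 {
    let canvas = ((400 - lx) * (400 - ly)) as f64;
    10.0 * 160000.0 / canvas
}

// dp[(lx,ux,ly,uy)]: minimum cost to process [lx,ux) x [ly,uy) using
// the canvas [lx,400) x [ly,400), as in the recurrence above.
fn dp(r: Rect, coords: &[usize], memo: &mut HashMap<Rect, f64>) -> f64 {
    if let Some(&c) = memo.get(&r) {
        return c;
    }
    let (lx, ux, ly, uy) = r;
    // Case 1: paint the whole rectangle with one color.
    let mut best = paint_cost(lx, ux, ly, uy);
    for &v in coords {
        // Case 2: cut at x = v, solve both halves, pay cut & merge.
        if lx < v && v < ux {
            best = best.min(
                dp((lx, v, ly, uy), coords, memo)
                    + dp((v, ux, ly, uy), coords, memo)
                    + cut_merge_cost(lx, ly),
            );
        }
        // Case 3: cut at y = v.
        if ly < v && v < uy {
            best = best.min(
                dp((lx, ux, ly, v), coords, memo)
                    + dp((lx, ux, v, uy), coords, memo)
                    + cut_merge_cost(lx, ly),
            );
        }
    }
    memo.insert(r, best);
    best
}

fn main() {
    let coords = [0, 100, 200, 300, 400];
    let mut memo = HashMap::new();
    let total = dp((0, 400, 0, 400), &coords, &mut memo);
    println!("{:.1}", total); // cutting at x = 200 beats painting once
}
```

In this toy instance the DP correctly prefers cutting at x=200 and painting the two halves over painting the whole canvas with one color.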
This algorithm takes O(400^5) time, which is too slow, so we speed it up in two steps. First, we run the dynamic programming on all small rectangles (height and width <= 40). For each coordinate, we count how often a cut is performed there. We then pick the 100 coordinates with the highest frequency and rerun the DP only on rectangles whose endpoints are among the selected coordinates. This reduces the running time to O(100^5), which was fast enough with parallelization.
See src/wata.rs for details.
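The coordinate-selection step can be sketched like this; the function name and the tie-breaking rule are illustrative, not taken from the repository:

```rust
// Sketch of the coordinate-selection step: given how often each
// coordinate was used as a cut in the small-rectangle DP pass, keep the
// k most frequent coordinates for the second, coarser DP pass.
fn top_k_coords(cut_counts: &[(usize, u64)], k: usize) -> Vec<usize> {
    let mut v: Vec<(usize, u64)> = cut_counts.to_vec();
    // Sort by descending frequency; break ties by coordinate for determinism.
    v.sort_by(|a, b| b.1.cmp(&a.1).then(a.0.cmp(&b.0)));
    let mut picked: Vec<usize> = v.into_iter().take(k).map(|(c, _)| c).collect();
    picked.sort();
    picked
}

fn main() {
    let counts = vec![(10, 3), (20, 9), (30, 1), (40, 9), (50, 5)];
    println!("{:?}", top_k_coords(&counts, 3)); // prints [20, 40, 50]
}
```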
In this problem, operations in the early stages are cheap while operations near the end are expensive. Our program builds the image starting from one corner, so operations near the opposite corner are costly. Therefore, we swapped portions of the target image in advance, moving the complex parts to less costly locations, and swapped them back at the end, reducing the overall cost. The automatic swap program specializes in bringing white areas to the corners: it detects white columns and rows and swaps them toward the corners, thereby increasing the white area there.
See src/chokudai1.rs for details.
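The detection step can be sketched as follows; this is a hedged illustration, not the repository's code, and "white" here means exactly (255,255,255,255) while the real detector may use a tolerance:

```rust
// Find fully-white rows and columns of an RGBA image so they can be
// swapped toward a corner. `img[y][x]` holds one RGBA pixel.
fn white_rows(img: &[Vec<[u8; 4]>]) -> Vec<usize> {
    img.iter()
        .enumerate()
        .filter(|(_, row)| row.iter().all(|&p| p == [255, 255, 255, 255]))
        .map(|(y, _)| y)
        .collect()
}

fn white_cols(img: &[Vec<[u8; 4]>]) -> Vec<usize> {
    let w = img[0].len();
    (0..w)
        .filter(|&x| img.iter().all(|row| row[x] == [255, 255, 255, 255]))
        .collect()
}

fn main() {
    let white = [255u8, 255, 255, 255];
    let black = [0u8, 0, 0, 255];
    // 3x3 image: the middle row and the last column are all white.
    let img = vec![
        vec![black, black, white],
        vec![white, white, white],
        vec![black, black, white],
    ];
    println!("{:?} {:?}", white_rows(&img), white_cols(&img)); // [1] [2]
}
```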
For some problems, to exploit the characteristics of the images, we hand-crafted the initial swap preprocessing. We generally wrote the ISL by hand, but used several utilities, e.g., one that reverses an ISL program (see get_reversed_program in src/wata.rs).
We improved the ISL program generated by dynamic programming by applying local optimizations.
We mainly relied on local search, i.e., we tried to reduce the cost by slightly modifying the program, e.g., removing an operation or modifying a coordinate of a cut operation.
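This kind of hill-climbing local search can be sketched generically as follows; the cost function, neighborhood, and RNG below are toy stand-ins, not the solver's actual objective:

```rust
// Generic hill climbing: repeatedly propose a small modification and
// keep it only if the cost improves.
fn local_search<F, G>(mut state: Vec<i64>, cost: F, mut neighbor: G, steps: usize) -> Vec<i64>
where
    F: Fn(&[i64]) -> i64,
    G: FnMut(&[i64]) -> Vec<i64>,
{
    let mut best = cost(&state);
    for _ in 0..steps {
        let cand = neighbor(&state);
        let c = cost(&cand);
        if c < best {
            best = c;
            state = cand;
        }
    }
    state
}

fn main() {
    // Toy objective: squared distance of three "cut coordinates" to a target.
    let target = [120i64, 260, 350];
    let cost = |s: &[i64]| s.iter().zip(&target).map(|(a, b)| (a - b) * (a - b)).sum::<i64>();
    // Tiny LCG so the sketch needs no external crates.
    let mut seed = 1u64;
    let mut rng = move || {
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        seed
    };
    // Neighborhood: nudge one randomly chosen coordinate by +-1.
    let neighbor = move |s: &[i64]| {
        let mut t = s.to_vec();
        let i = (rng() >> 33) as usize % t.len();
        t[i] += if rng() % 2 == 0 { 1 } else { -1 };
        t
    };
    let result = local_search(vec![200, 200, 200], cost, neighbor, 20_000);
    println!("{:?}", result);
}
```

Accepting only strict improvements keeps the cost monotonically non-increasing, which is the same safety property the real program-level local search relies on.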
We also applied compiler-like optimization techniques to remove unnecessary operations. For example, the function fix_cut_merge performs rule-based optimizations:
- eliminating LineCut-Merge pairs, and
- reordering sequences of successive LineCuts and Merges.
See src/local_optimize.rs for details.
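The first rule can be sketched as a peephole pass; this is a simplified model in the spirit of fix_cut_merge, with invented op types, ignoring ISL's string block ids and cost bookkeeping:

```rust
// A LineCut immediately followed by a Merge of the same block is a
// no-op pair and can be removed.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Op {
    LineCut(u32), // cut block `id`
    Merge(u32),   // merge the two halves of block `id` back together
    Color(u32),   // paint block `id`
}

fn eliminate_cut_merge(prog: &[Op]) -> Vec<Op> {
    let mut out: Vec<Op> = Vec::new();
    for &op in prog {
        match (out.last(), op) {
            // LineCut(id) directly followed by Merge(id): drop both.
            (Some(&Op::LineCut(a)), Op::Merge(b)) if a == b => {
                out.pop();
            }
            _ => out.push(op),
        }
    }
    out
}

fn main() {
    let prog = vec![Op::LineCut(0), Op::Merge(0), Op::Color(0), Op::LineCut(1)];
    println!("{:?}", eliminate_cut_merge(&prog)); // [Color(0), LineCut(1)]
}
```

Scanning against the tail of the output vector also catches pairs that only become adjacent after an earlier pair has been removed.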
Apply dynamic programming with n^2 states:
+-+-+-+-+-+-+-+ ^ +-+-+-+-+-+-+-+ ^
| | | | | | |
+ + a + + + a
| | | | | | |
+-+-+-+-+-+-+-+ v + +-+-+-+-+ v
| | | | | | | | | | | |
+ +-+-+-+-+ + +-+-+-+-+
| | | | | | | | | | | |
+ +-+-+-+-+ and + +-+-+-+-+
| | | | | | | | | | | |
+ +-+-+-+-+ + +-+-+-+-+
| | | | | | | | | | | |
+ +-+-+-+-+ + +-+-+-+-+
| | | | | | | | | | | |
+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+
<- b -> <- b ->
See src/optmerge.rs for details.
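A simplified variant of such a merge-order DP can be sketched as follows; this is not the state space drawn above but a cruder O(n^2)-state model we use for illustration, with an assumed merge cost of round(canvas_area / max(area_a, area_b)):

```rust
// dp[h][w]: minimum cost to merge an h x w subgrid of unit blocks into
// one block, trying every horizontal or vertical split.
fn merge_dp(n: usize) -> Vec<Vec<u64>> {
    let canvas = (n * n) as f64;
    // Assumed merge cost for combining blocks of areas a and b.
    let cost = |a: usize, b: usize| (canvas / a.max(b) as f64).round() as u64;
    let mut dp = vec![vec![u64::MAX; n + 1]; n + 1];
    for h in 1..=n {
        for w in 1..=n {
            if h == 1 && w == 1 {
                dp[h][w] = 0; // a single block needs no merge
                continue;
            }
            let mut best = u64::MAX;
            // Split into a top strip of h1 rows and a bottom strip.
            for h1 in 1..h {
                best = best.min(dp[h1][w] + dp[h - h1][w] + cost(h1 * w, (h - h1) * w));
            }
            // Split into a left strip of w1 columns and a right strip.
            for w1 in 1..w {
                best = best.min(dp[h][w1] + dp[h][w - w1] + cost(h * w1, h * (w - w1)));
            }
            dp[h][w] = best;
        }
    }
    dp
}

fn main() {
    let dp = merge_dp(4);
    println!("cost to merge the full 4x4 grid: {}", dp[4][4]);
}
```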
Quickly and precisely choosing the color to fill a given region is essential. We used Weiszfeld's method to compute the geometric median; the coordinate-wise median is used as a fast approximation.
See src/color.rs for details.
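The Weiszfeld iteration can be sketched as follows; this is a generic textbook version, seeded with the coordinate-wise mean and stopping when the estimate lands on a data point, not the exact code in src/color.rs:

```rust
// Weiszfeld's method for the geometric median of RGBA points: repeatedly
// replace the estimate with the average of the points weighted by the
// inverse of their distance to the current estimate.
fn geometric_median(pts: &[[f64; 4]], iters: usize) -> [f64; 4] {
    // Seed with the coordinate-wise mean.
    let n = pts.len() as f64;
    let mut m = [0.0; 4];
    for p in pts {
        for k in 0..4 {
            m[k] += p[k] / n;
        }
    }
    for _ in 0..iters {
        let mut num = [0.0; 4];
        let mut den = 0.0;
        let mut at_point = false;
        for p in pts {
            let d: f64 = (0..4).map(|k| (p[k] - m[k]).powi(2)).sum::<f64>().sqrt();
            if d < 1e-9 {
                // Estimate (numerically) coincides with a data point;
                // stop there (a common simplification of the method).
                at_point = true;
                break;
            }
            for k in 0..4 {
                num[k] += p[k] / d;
            }
            den += 1.0 / d;
        }
        if at_point {
            break;
        }
        for k in 0..4 {
            m[k] = num[k] / den;
        }
    }
    m
}

fn main() {
    // Three of four points agree, so the geometric median is pulled
    // strongly toward the majority color.
    let mut pts = vec![[0.0, 0.0, 0.0, 255.0]; 3];
    pts.push([255.0, 255.0, 255.0, 255.0]);
    println!("{:?}", geometric_median(&pts, 100));
}
```

Unlike the mean, the geometric median is robust to a minority of outlier pixels, which is why it gives better fill colors than simple averaging.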
Our solution this year does not require a large number of runs, so we did not build automation for running hundreds of instances. Thanks to Rust, we parallelized our solution safely; our program runs faster with more CPUs, so we worked on machines with 224 vCPUs.
We used Discord, Google Workspace, GitHub, and Pipedream.