[AMORO-2204] Support Spark 3.5 for mixed format tables #3428

Open: wants to merge 79 commits into master

Commits (79)

678db21
add tencent-dlc profile
zhoujinsong May 27, 2024
4707455
Add amoro-dlc Dockerfile
zhoujinsong May 27, 2024
5d63c77
Change derby url
zhoujinsong May 27, 2024
e6b5de4
Fix amoro-dlc Dockerfile
zhoujinsong May 27, 2024
a705e9d
Fix ams start cmd
zhoujinsong May 27, 2024
a07578e
Rollback change in entrypoint.sh
zhoujinsong May 27, 2024
e188da4
Merge branch 'master' into tencent-master
zhoujinsong May 28, 2024
cb26657
Remove lake fs dependencies
zhoujinsong May 29, 2024
ac53ebf
Add lake fs dependencies
zhoujinsong May 29, 2024
bf14a93
Change hive dependency to dlc version
zhoujinsong May 29, 2024
45e9a63
Remove hive version in tencent-dlc profile
zhoujinsong May 31, 2024
26c6053
Merge branch 'master' into tencent-master
zhoujinsong Jun 18, 2024
80538be
Merge branch 'master' into tencent-master
zhoujinsong Jun 19, 2024
6c51d11
merge master & fix conflicts
zhoujinsong Jun 20, 2024
41f523e
Merge branch 'master' into tencent-master
zhoujinsong Jun 20, 2024
7c5e3f3
Simplify amoro-dlc Dockerfile
zhoujinsong Jun 20, 2024
4801ef7
Copy entrypoint.sh
zhoujinsong Jun 20, 2024
3dd755a
Fix amoro-dlc Dockerfile
zhoujinsong Jun 20, 2024
656ad87
Fix amoro-dlc Dockerfile
zhoujinsong Jun 20, 2024
bb0e508
Fix amoro-dlc Dockerfile
zhoujinsong Jun 20, 2024
fef8744
Add some debug info for amoro-dlc Dockerfile
zhoujinsong Jun 20, 2024
5252f78
Fix amoro-dlc Dockerfile
zhoujinsong Jun 20, 2024
a2d3410
Merge branch 'master' into tencent-master
zhoujinsong Jun 21, 2024
d673c4b
Merge branch 'master' into tencent-master
zhoujinsong Jul 1, 2024
286e602
Add optimizer task class loader
zhoujinsong Jul 2, 2024
90cae09
Merge branch 'master' into fix-optimizer-class-conflicts
zhoujinsong Jul 4, 2024
4aaca15
Exclude not needed classes from spark optimizer
zhoujinsong Jul 4, 2024
87faa19
Fix conflicts & merge master
zhoujinsong Jul 4, 2024
dc06eb8
Merge branch 'tencent-master' into fix-optimizer-class-conflicts
zhoujinsong Jul 4, 2024
b4547a2
Merge branch 'master' into tencent-master
zhoujinsong Jul 8, 2024
00dbe21
Merge branch 'master' into fix-optimizer-class-conflicts
zhoujinsong Jul 8, 2024
21d62c9
Rollback unnecessary changes
zhoujinsong Jul 8, 2024
18990b7
Merge branch 'fix-optimizer-class-conflicts' into 'tencent-master' (m…
Jul 8, 2024
492fc86
Solve the problem of orc package conflict and thrift size limit.
Jul 1, 2024
e5c13fd
Optimize database configuration for mysql5.6.
Jul 10, 2024
1919e74
Modify thrift's maxMessageSize to 1000m.
Jul 10, 2024
91c39f6
Merge branch 'dev/xiaosefeng-fix-conflict' into 'tencent-master' (mer…
Jul 10, 2024
4079c4b
Merge master & Fix conflicts
zhoujinsong Jul 22, 2024
ba4e2d2
Merge master & Fix conflicts
zhoujinsong Jul 23, 2024
55c74de
Merge branch 'master' into tencent-master
zhoujinsong Jul 30, 2024
a9d48ce
Full support AMORO_CONF_DIR envrionment variable
zhoujinsong Jul 31, 2024
90f60fd
Fix a load-config.sh export error
zhoujinsong Jul 31, 2024
baa1d90
Add conf dir to class path
zhoujinsong Jul 31, 2024
498f8b7
Fix ams.sh classpath envrionment value error
zhoujinsong Jul 31, 2024
c54132d
Change hive version to 2.3.9 for dlc profile
zhoujinsong Aug 1, 2024
9d2bdaf
Determine change store by name
zhoujinsong Aug 5, 2024
4d9b154
Remove not needed list database calls
zhoujinsong Aug 5, 2024
dc28aa4
Allow configure catalog-impl for hive type
zhoujinsong Aug 5, 2024
b0ae623
Support catalog-impl property for spark mixed-format
zhoujinsong Aug 6, 2024
ad3e8cb
Fix checkstyle errors
zhoujinsong Aug 6, 2024
7a6faee
Merge branch 'master' into tencent-master
zhoujinsong Aug 7, 2024
8d32a16
Optimize catalog creation API, no need to upload files first (merge r…
Aug 14, 2024
97883a2
Merge master & Fix conflicts
zhoujinsong Sep 5, 2024
a3eb17d
Merge branch 'master' into tencent-master
zhoujinsong Sep 9, 2024
6f984b6
Merge branch 'master' into tencent-master
zhoujinsong Sep 13, 2024
f4b69ef
Fixed display error when switching catalog authentication mode
Sep 13, 2024
051e89e
Fix tencent-dlc docker file build error
zhoujinsong Sep 14, 2024
f8d8b52
Merge branch 'master' into tencent-master
zhoujinsong Sep 14, 2024
2b822eb
Merge branch 'dev/xiaosefeng-fix-updatecatalog' into 'tencent-master'…
Sep 14, 2024
1dcc693
Modify cos related dependency packages and dependency sources
Sep 14, 2024
26eb63a
Merge branch 'xiaosefeng-dependency-updates' into 'tencent-master' (m…
Sep 14, 2024
230cc4b
Merge branch 'master' into tencent-master
zhoujinsong Sep 25, 2024
9cd0005
Rewrite pos delete files not written by optimizing
zhoujinsong Sep 26, 2024
2317c5f
Merge branch 'master' into tencent-master
zhoujinsong Sep 26, 2024
3e543c1
Fix log4j2 info log rollover issue
zhoujinsong Sep 26, 2024
29c9479
Merge branch 'fix-info-log-rollover-issue' into tencent-master
zhoujinsong Sep 26, 2024
07e8a5f
Merge branch 'master' into tencent-master
zhoujinsong Oct 8, 2024
94052d6
Merge branch 'master' into tencent-master
zhoujinsong Oct 10, 2024
10b520b
Add amoro conf into classpath
zhoujinsong Oct 10, 2024
0cfe7a2
Merge master & fix conflicts
zhoujinsong Oct 23, 2024
0c1611f
Merge master & fix conflicts
zhoujinsong Nov 25, 2024
aec35b1
init mixed spark 3.5 modules
zhoujinsong Oct 23, 2024
b139643
adapt mixformate for spark3.5
Nov 29, 2024
57a7f13
Merge branch 'master' into add-mixed-spark-3.5
zhoujinsong Feb 5, 2025
a5d3d94
Rollback some unnecessary changes
zhoujinsong Feb 5, 2025
ee226b7
Rollback unnecessary changes
zhoujinsong Feb 5, 2025
0410465
Rollback unnecessary changes
zhoujinsong Feb 5, 2025
b42c758
Fixed some unit test issues
zhoujinsong Feb 5, 2025
6231a8d
Fixed some unit test issues
zhoujinsong Feb 5, 2025

Files changed

@@ -19,6 +19,7 @@
package org.apache.amoro.server.table.internal;

import org.apache.amoro.mixed.InternalMixedIcebergCatalog;
import org.apache.amoro.table.MixedTable;

/** Constants defines for internal table */
public class InternalTableConstants {
@@ -37,6 +38,5 @@ public class InternalTableConstants {
public static final String OSS_PROTOCOL_PREFIX = "oss://";

public static final String CHANGE_STORE_TABLE_NAME_SUFFIX =
InternalMixedIcebergCatalog.CHANGE_STORE_SEPARATOR
+ InternalMixedIcebergCatalog.CHANGE_STORE_NAME;
InternalMixedIcebergCatalog.CHANGE_STORE_SEPARATOR + MixedTable.CHANGE_STORE_IDENTIFIER;
}
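
For context, a minimal sketch (not part of the diff) of what the refactored suffix resolves to, using only values visible elsewhere in this PR (CHANGE_STORE_SEPARATOR is "@", MixedTable.CHANGE_STORE_IDENTIFIER is "change"):

    // Sketch only: resolving the suffix from the constants shown in this PR.
    String suffix =
        InternalMixedIcebergCatalog.CHANGE_STORE_SEPARATOR + MixedTable.CHANGE_STORE_IDENTIFIER;
    // suffix == "@change", so the change store of internal table "db.orders" is "db.orders@change".
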
@@ -59,7 +59,7 @@ CREATE TABLE `table_identifier`
`table_name` varchar(128) NOT NULL COMMENT 'Table name',
PRIMARY KEY (`table_id`),
UNIQUE KEY `table_name_index` (`catalog_name`,`db_name`,`table_name`)
) ROW_FORMAT=DYNAMIC;
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT 'Table identifier for AMS' ROW_FORMAT=DYNAMIC;
INSERT INTO `table_identifier` (`catalog_name`, `db_name`, `table_name`) SELECT `catalog_name`, `db_name`, `table_name` FROM `table_metadata`;

-- table_metadata
@@ -21,6 +21,7 @@
import org.apache.amoro.io.IcebergDataTestHelpers;
import org.apache.amoro.io.MixedDataTestHelpers;
import org.apache.amoro.mixed.MixedTables;
import org.apache.amoro.properties.CatalogMetaProperties;
import org.apache.amoro.server.utils.IcebergTableUtil;
import org.apache.amoro.shade.guava32.com.google.common.collect.Lists;
import org.apache.amoro.shade.guava32.com.google.common.collect.Maps;
@@ -123,7 +124,12 @@ private static Table newIcebergTable(Catalog catalog) {
}

private static MixedTable newMixedTable(Catalog catalog, boolean withKey) {
MixedTables mixedTables = new MixedTables(TableMetaStore.EMPTY, Maps.newHashMap(), catalog);
MixedTables mixedTables =
new MixedTables(
TableMetaStore.EMPTY,
Maps.newHashMap(),
catalog,
CatalogMetaProperties.MIXED_FORMAT_TABLE_STORE_SEPARATOR_DEFAULT);
return mixedTables.createTable(
org.apache.amoro.table.TableIdentifier.of("cata", "db", "table"),
schema,
@@ -88,7 +88,7 @@ public void setMaxReconnects(int maxReconnects) {

public static PoolConfig<?> forUrl(String url) {
PoolConfig<?> poolConfig = new PoolConfig<>();
URLEncodedUtils.parse(URI.create(url), Charset.defaultCharset())
URLEncodedUtils.parse(URI.create(url), String.valueOf(Charset.defaultCharset()))
.forEach(
pair -> {
try {
@@ -40,7 +40,6 @@ public class ThriftClientPool<
private static final Logger LOG = LoggerFactory.getLogger(ThriftClientPool.class);
private static final int RECONNECT_INTERVAL = 2000;
private static final int BORROW_ATTEMPTS = 5;

private final ThriftClientFactory clientFactory;
private final ThriftPingFactory pingFactory;
private final GenericObjectPool<ThriftClient<T>> pool;
@@ -25,10 +25,10 @@ public class CatalogMetaProperties {
public static final String STORAGE_CONFIGS_KEY_HDFS_SITE = "hadoop.hdfs.site";
public static final String STORAGE_CONFIGS_KEY_CORE_SITE = "hadoop.core.site";
public static final String STORAGE_CONFIGS_KEY_HIVE_SITE = "hive.site";

public static final String STORAGE_CONFIGS_KEY_REGION = "storage.s3.region";
public static final String STORAGE_CONFIGS_KEY_S3_ENDPOINT = "storage.s3.endpoint";
public static final String STORAGE_CONFIGS_KEY_OSS_ENDPOINT = "storage.oss.endpoint";

public static final String STORAGE_CONFIGS_VALUE_TYPE_HDFS_LEGACY = "hdfs";
public static final String STORAGE_CONFIGS_VALUE_TYPE_HADOOP = "Hadoop";
public static final String STORAGE_CONFIGS_VALUE_TYPE_S3 = "S3";
@@ -65,6 +65,7 @@ public class BasicMixedIcebergCatalog implements MixedFormatCatalog {
private AmsClient client;
private MixedTables tables;
private SupportsNamespaces asNamespaceCatalog;
private String separator;

@Override
public String name() {
@@ -93,6 +94,7 @@ public void initialize(String name, Map<String, String> properties, TableMetaSto
}
this.databaseFilterPattern = databaseFilterPattern;
this.catalogProperties = properties;
this.separator = tableStoreSeparator();
this.tables = newMixedTables(metaStore, properties, icebergCatalog());
if (properties.containsKey(CatalogMetaProperties.AMS_URI)) {
this.client = new PooledAmsClient(properties.get(CatalogMetaProperties.AMS_URI));
@@ -221,7 +223,13 @@ protected MixedTables newMixedTables(
TableMetaStore metaStore,
Map<String, String> catalogProperties,
org.apache.iceberg.catalog.Catalog icebergCatalog) {
return new MixedTables(metaStore, catalogProperties, icebergCatalog);
return new MixedTables(metaStore, catalogProperties, icebergCatalog, separator);
}

protected String tableStoreSeparator() {
return catalogProperties.getOrDefault(
CatalogMetaProperties.MIXED_FORMAT_TABLE_STORE_SEPARATOR,
CatalogMetaProperties.MIXED_FORMAT_TABLE_STORE_SEPARATOR_DEFAULT);
}

private org.apache.iceberg.catalog.TableIdentifier toIcebergTableIdentifier(
@@ -21,6 +21,7 @@
import org.apache.amoro.TableFormat;
import org.apache.amoro.shade.guava32.com.google.common.base.Preconditions;
import org.apache.amoro.shade.guava32.com.google.common.collect.Maps;
import org.apache.amoro.table.MixedTable;
import org.apache.amoro.table.PrimaryKeySpec;
import org.apache.amoro.table.TableMetaStore;
import org.apache.hadoop.conf.Configuration;
@@ -43,7 +44,6 @@
public class InternalMixedIcebergCatalog extends BasicMixedIcebergCatalog {

public static final String CHANGE_STORE_SEPARATOR = "@";
public static final String CHANGE_STORE_NAME = "change";

public static final String HTTP_HEADER_LIST_TABLE_FILTER = "LIST-TABLE-FILTER";

@@ -81,29 +81,23 @@ public List<org.apache.amoro.table.TableIdentifier> listTables(String database)
@Override
protected MixedTables newMixedTables(
TableMetaStore metaStore, Map<String, String> catalogProperties, Catalog icebergCatalog) {
return new InternalMixedTables(metaStore, catalogProperties, icebergCatalog);
return new InternalMixedTables(
metaStore, catalogProperties, icebergCatalog, tableStoreSeparator());
}

@Override
protected String tableStoreSeparator() {
return CHANGE_STORE_SEPARATOR;
}

static class InternalMixedTables extends MixedTables {

public InternalMixedTables(
TableMetaStore tableMetaStore, Map<String, String> catalogProperties, Catalog catalog) {
super(tableMetaStore, catalogProperties, catalog);
}

/**
* For internal table, using {table-name}@change as change store identifier, this identifier
* cloud be recognized by AMS. Due to '@' is an invalid character of table name, the change
* store identifier will never be conflict with other table name.
*
* @param baseIdentifier base store table identifier.
* @return change store iceberg table identifier.
*/
@Override
protected TableIdentifier generateChangeStoreIdentifier(TableIdentifier baseIdentifier) {
return TableIdentifier.of(
baseIdentifier.namespace(),
baseIdentifier.name() + CHANGE_STORE_SEPARATOR + CHANGE_STORE_NAME);
TableMetaStore tableMetaStore,
Map<String, String> catalogProperties,
Catalog catalog,
String separator) {
super(tableMetaStore, catalogProperties, catalog, separator);
}

/**
@@ -121,6 +115,13 @@ protected Table createChangeStore(
return tableMetaStore.doAs(() -> icebergCatalog.loadTable(changeIdentifier));
}

@Override
protected TableIdentifier generateChangeStoreIdentifier(TableIdentifier baseIdentifier) {
return TableIdentifier.of(
baseIdentifier.namespace(),
baseIdentifier.name() + CHANGE_STORE_SEPARATOR + MixedTable.CHANGE_STORE_IDENTIFIER);
}

/**
* The change store will be dropped automatically by AMS when dropping the base store, so we do
* nothing here
@@ -21,7 +21,6 @@
import org.apache.amoro.TableFormat;
import org.apache.amoro.io.AuthenticatedFileIO;
import org.apache.amoro.io.AuthenticatedFileIOs;
import org.apache.amoro.properties.CatalogMetaProperties;
import org.apache.amoro.shade.guava32.com.google.common.collect.Maps;
import org.apache.amoro.table.BaseTable;
import org.apache.amoro.table.BasicKeyedTable;
@@ -32,6 +31,7 @@
import org.apache.amoro.table.TableMetaStore;
import org.apache.amoro.table.UnkeyedTable;
import org.apache.amoro.utils.MixedFormatCatalogUtil;
import org.apache.amoro.utils.MixedTableUtil;
import org.apache.amoro.utils.TablePropertyUtil;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
@@ -47,16 +47,20 @@
public class MixedTables {
private static final Logger LOG = LoggerFactory.getLogger(MixedTables.class);

protected TableMetaStore tableMetaStore;
protected Catalog icebergCatalog;

protected Map<String, String> catalogProperties;
protected final TableMetaStore tableMetaStore;
protected final Catalog icebergCatalog;
protected final Map<String, String> catalogProperties;
protected final String separator;

public MixedTables(
TableMetaStore tableMetaStore, Map<String, String> catalogProperties, Catalog catalog) {
TableMetaStore tableMetaStore,
Map<String, String> catalogProperties,
Catalog catalog,
String separator) {
this.tableMetaStore = tableMetaStore;
this.icebergCatalog = catalog;
this.catalogProperties = catalogProperties;
this.separator = separator;
}

/**
@@ -86,12 +90,9 @@ public TableIdentifier parseChangeIdentifier(Table base) {
* @return change store table identifier.
*/
protected TableIdentifier generateChangeStoreIdentifier(TableIdentifier baseIdentifier) {
String separator =
catalogProperties.getOrDefault(
CatalogMetaProperties.MIXED_FORMAT_TABLE_STORE_SEPARATOR,
CatalogMetaProperties.MIXED_FORMAT_TABLE_STORE_SEPARATOR_DEFAULT);
return TableIdentifier.of(
baseIdentifier.namespace(), baseIdentifier.name() + separator + "change" + separator);
baseIdentifier.namespace(),
MixedTableUtil.changeStoreName(baseIdentifier.name(), separator));
}

/**
@@ -31,6 +31,8 @@
/** Represents an mixed-format table. */
public interface MixedTable extends Serializable {

String CHANGE_STORE_IDENTIFIER = "change";

/** Returns the {@link TableIdentifier} of this table */
TableIdentifier id();

@@ -190,4 +190,27 @@ public static PartitionSpec getMixedTablePartitionSpecById(MixedTable mixedTable
return spec;
}
}

/**
* Generate change store table name for mixed format tables.
*
* @param tableName mixed format table name.
* @param separator change store separator.
* @return change store table name.
*/
public static String changeStoreName(String tableName, String separator) {
return tableName + separator + MixedTable.CHANGE_STORE_IDENTIFIER + separator;
}

/**
* Determine if it is a change store of a mixed format table by the table name.
*
* @param tableName table name.
* @param separator change store separator.
* @return if it is a change store.
*/
public static boolean isChangeStore(String tableName, String separator) {
return tableName != null
&& tableName.endsWith(separator + MixedTable.CHANGE_STORE_IDENTIFIER + separator);
}
}
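
For context, a minimal usage sketch (not part of the diff) of the two helpers added above. The separator value "_" is purely illustrative; the actual default comes from CatalogMetaProperties.MIXED_FORMAT_TABLE_STORE_SEPARATOR_DEFAULT, whose value is not shown in this PR:

    // Sketch only: "_" is a stand-in separator, not necessarily the project default.
    String separator = "_";
    String changeStore = MixedTableUtil.changeStoreName("orders", separator); // "orders_change_"
    boolean isChange = MixedTableUtil.isChangeStore(changeStore, separator);  // true
    boolean notChange = MixedTableUtil.isChangeStore("orders", separator);    // false
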
@@ -220,10 +220,16 @@ protected void createHiveSource(
}

protected void createViewSource(Schema schema, List<Record> data) {
createViewSource(schema, data, Double.NaN);
}

protected void createViewSource(Schema schema, List<Record> data, double version) {
Dataset<Row> ds =
spark()
.createDataFrame(
data.stream().map(TestTableUtil::recordToRow).collect(Collectors.toList()),
data.stream()
.map(r -> TestTableUtil.recordToRow(r, version))
.collect(Collectors.toList()),
SparkSchemaUtil.convert(schema));

ds.createOrReplaceTempView(sourceTable);
@@ -72,17 +72,17 @@
import java.util.stream.Collectors;

public class TestTableUtil {

private static Object[] recordToObjects(Record record) {
private static Object[] recordToObjects(Record record, double sparkVersion) {
Object[] values = new Object[record.size()];
for (int i = 0; i < values.length; i++) {
Object v = record.get(i);
if (v instanceof LocalDateTime) {
Timestamp ts =
Timestamp.valueOf(((LocalDateTime) v).atZone(ZoneOffset.UTC).toLocalDateTime());
Timestamp tsUTC = Timestamp.valueOf((LocalDateTime) v);
values[i] = ts;
continue;
if (Double.isNaN(sparkVersion) || sparkVersion < 3.4) {
Timestamp ts =
Timestamp.valueOf(((LocalDateTime) v).atZone(ZoneOffset.UTC).toLocalDateTime());
values[i] = ts;
continue;
}
} else if (v instanceof OffsetDateTime) {
v = new Timestamp(((OffsetDateTime) v).toInstant().toEpochMilli());
}
@@ -92,7 +92,11 @@ private static Object[] recordToObjects(Record record) {
}

public static Row recordToRow(Record record) {
Object[] values = recordToObjects(record);
return recordToRow(record, Double.NaN);
}

public static Row recordToRow(Record record, double sparkVersion) {
Object[] values = recordToObjects(record, sparkVersion);
return RowFactory.create(values);
}

@@ -103,6 +107,10 @@ public static InternalRow recordToInternalRow(Schema schema, Record record) {
}

public static Record rowToRecord(Row row, Types.StructType type) {
return rowToRecord(row, type, Double.NaN);
}

public static Record rowToRecord(Row row, Types.StructType type, double sparkVersion) {
Record record = GenericRecord.create(type);
for (int i = 0; i < type.fields().size(); i++) {
Object v = row.get(i);
@@ -114,9 +122,14 @@ public static Record rowToRecord(Row row, Types.StructType type) {
record.set(i, offsetDateTime);
continue;
} else if (field.type().equals(Types.TimestampType.withoutZone())) {
Preconditions.checkArgument(v instanceof Timestamp);
Object localDatetime = ((Timestamp) v).toLocalDateTime();
record.set(i, localDatetime);
if (Double.isNaN(sparkVersion) || sparkVersion < 3.4) {
Preconditions.checkArgument(v instanceof Timestamp);
Object localDatetime = ((Timestamp) v).toLocalDateTime();
record.set(i, localDatetime);
} else {
Preconditions.checkArgument(v instanceof LocalDateTime);
record.set(i, v);
}
continue;
}
record.set(i, v);
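
For context, the sparkVersion parameter threaded through these test helpers exists because Spark 3.4 and later (including the Spark 3.5 module added by this PR) exchange timestamp-without-zone values as java.time.LocalDateTime, while earlier versions use java.sql.Timestamp. A condensed sketch (not a method in this PR) of the rule applied above when converting a Row value back to a Record value:

    // Sketch only: condensed from the version branches added above.
    static Object toRecordValue(Object v, double sparkVersion) {
      if (Double.isNaN(sparkVersion) || sparkVersion < 3.4) {
        return ((java.sql.Timestamp) v).toLocalDateTime(); // Spark <= 3.3 rows carry Timestamp
      }
      return v; // Spark 3.4 / 3.5 rows already carry LocalDateTime
    }
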
2 changes: 2 additions & 0 deletions amoro-format-mixed/amoro-mixed-spark/pom.xml
@@ -37,5 +37,7 @@
<module>v3.2/amoro-mixed-spark-runtime-3.2</module>
<module>v3.3/amoro-mixed-spark-3.3</module>
<module>v3.3/amoro-mixed-spark-runtime-3.3</module>
<module>v3.5/amoro-mixed-spark-3.5</module>
<module>v3.5/amoro-mixed-spark-runtime-3.5</module>
</modules>
</project>