diff --git a/README.md b/README.md
index 686a1db..293faf2 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,170 @@
+
+Version 3.1.1.45/2.8.3.45/2.7.2.45
+
+Fixed issues:
+1. [Feature] Added a read-ahead strategy to improve sequential read performance
+2. [Improvement] Fixed the correctness of the getPos semantics of the append output stream for file buckets
+3. [Improvement] Wrapped the HadoopIllegalArgumentException thrown by the file-bucket truncate interface into an IOException
+4. [Improvement] When the fast-delete trash target object already exists, append a millisecond-precision timestamp suffix and retry conflicts up to the maximum time
+5. [Improvement] Retry up to the maximum time on 409 conflicts when recursively deleting a directory in a file bucket
+6. [Improvement] Added an immediate-visibility switch (off by default) to the create interface
+7. [Improvement] Removed the check in HDFSWrapper that the default schema must be hdfs
+8. [Improvement] Upgraded the OBS SDK to 3.21.4.1, fixing the incorrect lastModifyTime field of the last file in list results
+9. [Improvement] Changed rename in HDFSWrapper to throw IOException
+
+=========================================================================
+
+Version 3.1.1.43/2.8.3.43/2.7.2.43
+
+Fixed issues:
+1. [Feature] Added the OBSWrapper feature to support accessing OBS through the hdfs schema
+2. [Feature] Fixed the rename performance issue for oversized directories in object buckets
+3. [Feature] Fixed concurrent conflicts when the fast-delete feature moves objects to trash
+4. [Test] Added OBSWrapper feature test cases
+5. [Test] Added test cases for trash concurrency-conflict scenarios
+
+=========================================================================
+
+Version 3.1.1.42/2.8.3.42/2.7.2.42
+
+Fixed issues:
+1. [Feature] Fixed an OOM caused by local write-cache file name strings only being cleaned up at process exit
+2. [Feature] Replaced the hadoop-common 2.8.3 dependency with 2.8.3.0101-hw-ei-14, which fixes security vulnerabilities
+3. [Feature] Removed redundant parameters from metrics statistics and added a close interface to release resources on exit
+4. [Feature] Excluded insecure jetty packages from the hadoop-common dependency
+
+=========================================================================
+
+Version 3.1.1.41/2.8.3.41/2.7.2.41
+
+Fixed issues:
+1. [Feature] Added metrics statistics for public interfaces
+2. [Feature] Files are immediately visible after create() is called
+3. [Feature] Interface access is rejected after FileSystem/InputStream/OutputStream is closed
+4. [Feature] finalize() no longer calls close() with I/O operations, preventing misuse by upper layers
+5. [Feature] Excluded the non-compliant jackson-databind version pulled in by hadoop-common 3.1.1 from the pom dependencies
+6. [Feature] Implemented getCanonicalServiceName to improve same-bucket load performance in HBase BulkLoad scenarios
+7. [Test] Added and improved retry mechanism test cases
+
+=========================================================================
+
+Version 3.1.1.40/2.8.3.40/2.7.2.40
+
+Fixed issues:
+1. [Feature] Reviewed interface semantics and aligned them with native HDFS semantics
+2. [Feature] Compliance rectifications for the open-source community
+3. [Feature] Added truncate support
+4. [Feature] Supported fine-grained authorization policies for directory names without a trailing slash
+5. [Feature] Fixed the recursive listing interface not updating directory lastModifyTime
+6. [Feature] Enhanced the retry mechanism to support retries for up to 180s
+7. [Feature] Initialization can probe write-cache accessibility, controlled by a switch that is off by default
+8. 
[Feature] Updated the OBS Java SDK to 3.20.6.1
+
+=========================================================================
+
+Version 3.1.1.39/2.8.3.39/2.7.2.39
+
+Fixed issues:
+1. [Feature] Rectifications from static checks and security scans
+2. [Feature] Added interface test cases
+
+
+=========================================================================
+
+
+Version 3.1.1.38/2.8.3.38/2.7.2.38
+
+Fixed issues:
+1. [Feature] listStatus now returns an accurate lastModifyTime for directories
+
+
+=========================================================================
+
+
+Version 3.1.1.37/2.8.3.37/2.7.2.37
+
+Fixed issues:
+1. [Feature] Replaced skip with a while loop in seekInStream and removed the available check
+2. [Feature] Changed the ObsClient and socket buffer sizes to 256KB
+3. [Feature] Made the OBS SDK connection pool optimization configurable via a switch
+4. [Feature] Improved OBSBlockOutputStream log output
+
+
+=========================================================================
+
+Version 3.1.1.36/2.8.3.36/2.7.2.36
+
+Fixed issues:
+1. [Feature] Added a switch (off by default) for eSDK authentication negotiation
+2. [Feature] Added exception retry logic to head bucket during initialization
+3. [Feature] Added retry logic when reopen fails in onReadFailure
+4. [Feature] The append stream records the file length on the client side, fixing possible exceptions
+5. [Feature] Added a recursive listStatus interface to improve listing performance for large directories
+6. [Feature] When trashing an object and an object of the same name already exists in the trash directory, the new object's timestamp suffix no longer contains colons
+7. [Test]
+   (1) Added a case verifying that the eSDK authentication negotiation switch takes effect
+   (2) Added head bucket retry mechanism test cases
+   (3) Added test cases for the reopen retry mechanism in onReadFailure
+   (4) Added an append stream case for when the head file length is invalid
+   (5) Added recursive listStatus listing cases
+   (6) Added test cases for trashing files and trashing directories
+
+
+=========================================================================
+
+Version 3.1.1.35/2.8.3.35/2.7.2.35
+
+Fixed issues:
+1. [Feature] Added retry logic to the rename() interface for file buckets in OBSFileInputStream.java
+
+
+=========================================================================
+
+Version 3.1.1.33/2.8.3.33/2.7.2.33
+
+Fixed issues:
+1. [Feature] OBSBlockOutputStream now closes the stream and cleans up write-cache files in its finalize method
+2. [Feature] Fixed a NullPointerException thrown when listFiles recursively lists a subdirectory
+3. [Feature] Integrated eSDK 3.19.11.1, fixing occasional request failures in EcsObsCredentialsProvider
+4. [Test]
+   (1) Added listFiles interface test cases
+
+
+=========================================================================
+
+Version 3.1.1.32/2.8.3.32/2.7.2.32
+
+Fixed issues:
+1. [Feature] Optimized client-side concurrent listing for list timeouts
+2. [Feature] Added a read(ByteBuffer buffer) overload
+3. [Feature] Fixed scenarios where the object-bucket rename interface did not follow native HDFS semantics
+4. [Feature] Added a configuration switch for the four-argument read in OBSFileInputStream.java, defaulting to the parent class method
+5. 
[Feature] Changed the default values of fs.obs.read.buff.size and fs.obs.write.buff.size to 64KB
+6. [Feature] Commented out the readaheadInputStreamEnabled read-ahead code
+7. [Test]
+   (1) Added four cases for object-bucket rename
+   (2) Added read(ByteBuffer buffer) interface test cases
+   (3) Added switch test cases for the four-argument read interface
+
+
+=========================================================================
+
+Version 3.1.1.31/2.8.3.31/2.7.2.31
+
+Fixed issues:
+1. [Feature] Fixed the four-argument read interface in OBSFileInputStream.java not following HDFS semantics, plus a return-length bug
+2. [Feature] Fixed inconsistent object last-modified times returned by the list and head interfaces
+3. [Feature] Fixed the return value of rename with identical source and destination to follow native HDFS semantics
+4. [Feature] Fixed a recursive delete bug
+5. [Test]
+   (1) Improved the getContentSummary cases, adding verification of recursive delete
+   (2) Added cases for the four-argument read interface's handling of different inputs and return values
+   (3) Added cases comparing the object last-modified times returned by list and head
+   (4) Added cases for rename with identical source and destination
+
+
+=========================================================================
+
 Version 3.1.1.30/2.8.3.30/2.7.2.30
 
 Fixed issues:
@@ -31,7 +198,6 @@ Version 3.1.1.29/2.8.3.27/2.7.2.29
 
 =========================================================================
 
-
 Version 3.1.1.28/2.8.3.28/2.7.2.28
 
 Fixed issues:
diff --git a/hadoop-huaweicloud/dev-support/findbugs-exclude.xml b/hadoop-huaweicloud/dev-support/findbugs-exclude.xml
index e79f84e..fa2556b 100644
--- a/hadoop-huaweicloud/dev-support/findbugs-exclude.xml
+++ b/hadoop-huaweicloud/dev-support/findbugs-exclude.xml
@@ -22,4 +22,9 @@
-
+
+
+
+
+
+
\ No newline at end of file
diff --git a/hadoop-huaweicloud/pom.xml b/hadoop-huaweicloud/pom.xml
index 0a6579e..5b2555a 100644
--- a/hadoop-huaweicloud/pom.xml
+++ b/hadoop-huaweicloud/pom.xml
@@ -1,427 +1,447 @@
-
-
-
- 4.0.0
- org.apache.hadoop
- hadoop-huaweicloud
- 2.8.3-hw-42
- Apache Hadoop OBS support
-
- This module contains code to support integration with OBS.
- It also declares the dependencies needed to work with OBS services. 
- - jar - - - UTF-8 - true - ${project.build.directory}/test - 2.8.3 - 2.8.3.0101-hw-ei-14 - 42 - 3.20.6.1 - - - - - dist - - 2.8.3 - 2.8.3.0101-hw-ei-14 - 42 - 3.20.6.1 - obs.shaded - - - - - org.apache.maven.plugins - maven-shade-plugin - - - shade-obs-fs - package - - shade - - - - - - com.jamesmurty.utils - ${shading.prefix}.com.jamesmurty.utils - - - - okio - ${shading.prefix}.okio - - - - okhttp3 - ${shading.prefix}.okhttp3 - - - - com.fasterxml.jackson - ${shading.prefix}.com.fasterxml.jackson - - - - - - - - com.jamesmurty.utils:* - com.squareup.okio:* - com.squareup.okhttp3:* - com.huawei.storage:esdk-obs-java - com.fasterxml.jackson.core:* - org.apache.hadoop:hadoop-huaweicloud - - - - - - *:* - - META-INF/*.SF - META-INF/*.DSA - META-INF/*.RSA - log4j2.xml - - - - - - - - - - - - - - hadoop-huaweicloud-${hadoop.plat.version}-hw-${obsa.version} - - - org.codehaus.mojo - findbugs-maven-plugin - - true - true - ${basedir}/dev-support/findbugs-exclude.xml - - Max - - - - org.apache.maven.plugins - maven-project-info-reports-plugin - - false - - - - org.apache.maven.plugins - maven-surefire-plugin - - 3600 - - - - - org.apache.maven.plugins - maven-jar-plugin - - - - true - lib/ - - - - - - org.apache.maven.plugins - maven-dependency-plugin - - - copy - package - - copy-dependencies - - - ${project.build.directory}/lib - - - - - - org.apache.maven.plugins - maven-compiler-plugin - - 8 - 8 - - - - - - - - - org.apache.hadoop - hadoop-common - ${hadoop.version} - compile - - - jdk.tools - jdk.tools - - - commons-beanutils - commons-beanutils - - - commons-beanutils-core - commons-beanutils - - - jackson-core-asl - org.codehaus.jackson - - - jackson-mapper-asl - org.codehaus.jackson - - - jetty-util - org.mortbay.jetty - - - netty - io.netty - - - nimbus-jose-jwt - com.nimbusds - - - protobuf-java - com.google.protobuf - - - zookeeper - org.apache.zookeeper - - - jackson-databind - com.fasterxml.jackson.core - - - jetty-server - org.eclipse.jetty - - - 
jetty-servlet - org.eclipse.jetty - - - jetty-util - org.eclipse.jetty - - - jetty-util-ajax - org.eclipse.jetty - - - jetty-webapp - org.eclipse.jetty - - - - - org.apache.hadoop - hadoop-common - ${hadoop.version} - test - - - commons-beanutils - commons-beanutils - - - commons-beanutils-core - commons-beanutils - - - jackson-core-asl - org.codehaus.jackson - - - jackson-mapper-asl - org.codehaus.jackson - - - jetty-util - org.mortbay.jetty - - - netty - io.netty - - - nimbus-jose-jwt - com.nimbusds - - - protobuf-java - com.google.protobuf - - - zookeeper - org.apache.zookeeper - - - jackson-databind - com.fasterxml.jackson.core - - - jetty-server - org.eclipse.jetty - - - jetty-servlet - org.eclipse.jetty - - - jetty-util - org.eclipse.jetty - - - jetty-util-ajax - org.eclipse.jetty - - - jetty-webapp - org.eclipse.jetty - - - test-jar - - - junit - junit - 4.12 - test - - - org.mockito - mockito-all - 1.10.19 - test - - - org.apache.hadoop - hadoop-mapreduce-client-jobclient - ${hadoop.version} - test - - - netty - io.netty - - - protobuf-java - com.google.protobuf - - - - - org.apache.hadoop - hadoop-yarn-server-tests - ${hadoop.version} - test - - - jackson-core-asl - org.codehaus.jackson - - - jackson-mapper-asl - org.codehaus.jackson - - - jetty-util - org.mortbay.jetty - - - netty - io.netty - - - protobuf-java - com.google.protobuf - - - zookeeper - org.apache.zookeeper - - - jetty-util - org.eclipse.jetty - - - jetty-server - org.eclipse.jetty - - - jetty-util-ajax - org.eclipse.jetty - - - test-jar - - - org.apache.hadoop - hadoop-mapreduce-examples - ${hadoop.version} - test - jar - - - com.huawei.storage - esdk-obs-java - ${esdk.version} - - - org.powermock - powermock-api-mockito - 1.7.4 - test - - - org.powermock - powermock-module-junit4 - 1.7.4 - test - - - + + + + 4.0.0 + com.huaweicloud + hadoop-huaweicloud + 2.8.3-hw-45 + hadoop-huaweicloud + + This module contains code to support integration with OBS. 
+ + jar + + + UTF-8 + 2.8.3.0101-hw-ei-14 + 3.21.4.1 + 2.8.3 + 45 + + + + + dist + + 2.8.3 + 2.8.3.0101-hw-ei-14 + 45 + 3.21.4.1 + obs.shaded + + + + + org.apache.maven.plugins + maven-shade-plugin + 3.2.1 + + false + + + + shade-obs-fs + package + + shade + + + + + + com.jamesmurty.utils + ${shading.prefix}.com.jamesmurty.utils + + + + okio + ${shading.prefix}.okio + + + + okhttp3 + ${shading.prefix}.okhttp3 + + + + com.fasterxml.jackson + ${shading.prefix}.com.fasterxml.jackson + + + + + + + + com.jamesmurty.utils:* + com.squareup.okio:* + com.squareup.okhttp3:* + com.huaweicloud:esdk-obs-java-optimised + com.fasterxml.jackson.core:* + + + + + + com.squareup.okhttp3:* + + okhttp3/internal/connection/ExchangeFinder.class + okhttp3/internal/connection/Transmitter.class + okhttp3/internal/http/RetryAndFollowUpInterceptor.class + + + + com.squareup.okio:* + + okio/AsyncTimeout.class + okio/SegmentPool.class + + + + *:* + + META-INF/*.SF + META-INF/*.DSA + META-INF/*.RSA + log4j2.xml + + + + + + + + + + + + + + hadoop-huaweicloud-${hadoop.plat.version}-hw-${obsa.version} + + + org.codehaus.mojo + findbugs-maven-plugin + 3.0.0 + + true + true + ${basedir}/dev-support/findbugs-exclude.xml + + Max + + + + org.apache.maven.plugins + maven-project-info-reports-plugin + 2.7 + + false + + + + org.apache.maven.plugins + maven-surefire-plugin + 2.12.4 + + 3600 + + + + + org.apache.maven.plugins + maven-jar-plugin + 2.5 + + + org.apache.maven.plugins + maven-dependency-plugin + 2.8 + + + copy + package + + copy-dependencies + + + ${project.build.directory}/lib + + + + + + org.apache.maven.plugins + maven-compiler-plugin + 3.1 + + 8 + 8 + -Xlint:unchecked + + + + + + + + org.apache.hadoop + hadoop-common + ${hadoop.version} + provided + + + jdk.tools + jdk.tools + + + commons-beanutils + commons-beanutils + + + commons-beanutils-core + commons-beanutils + + + jackson-core-asl + org.codehaus.jackson + + + jackson-mapper-asl + org.codehaus.jackson + + + jetty-util + 
org.mortbay.jetty + + + netty + io.netty + + + nimbus-jose-jwt + com.nimbusds + + + protobuf-java + com.google.protobuf + + + zookeeper + org.apache.zookeeper + + + jackson-databind + com.fasterxml.jackson.core + + + jetty-server + org.eclipse.jetty + + + jetty-servlet + org.eclipse.jetty + + + jetty-util + org.eclipse.jetty + + + jetty-util-ajax + org.eclipse.jetty + + + jetty-webapp + org.eclipse.jetty + + + + + org.apache.hadoop + hadoop-hdfs-client + ${hadoop.version} + provided + + + com.huaweicloud + esdk-obs-java-optimised + ${esdk.version} + + + org.apache.logging.log4j + log4j-api + + + org.apache.logging.log4j + log4j-core + + + + + + org.apache.hadoop + hadoop-common + ${hadoop.version} + test + + + jackson-core-asl + org.codehaus.jackson + + + jackson-mapper-asl + org.codehaus.jackson + + + jetty-util + org.mortbay.jetty + + + netty + io.netty + + + nimbus-jose-jwt + com.nimbusds + + + protobuf-java + com.google.protobuf + + + zookeeper + org.apache.zookeeper + + + jackson-databind + com.fasterxml.jackson.core + + + jetty-server + org.eclipse.jetty + + + jetty-util + org.eclipse.jetty + + + jetty-util-ajax + org.eclipse.jetty + + + test-jar + + + junit + junit + 4.12 + test + + + org.mockito + mockito-all + 1.10.19 + test + + + org.apache.hadoop + hadoop-mapreduce-client-jobclient + ${hadoop.version} + test + + + netty + io.netty + + + protobuf-java + com.google.protobuf + + + + + org.apache.hadoop + hadoop-yarn-server-tests + ${hadoop.version} + test + + + jackson-core-asl + org.codehaus.jackson + + + jackson-mapper-asl + org.codehaus.jackson + + + jetty-util + org.mortbay.jetty + + + netty + io.netty + + + protobuf-java + com.google.protobuf + + + zookeeper + org.apache.zookeeper + + + jetty-util + org.eclipse.jetty + + + jetty-server + org.eclipse.jetty + + + jetty-util-ajax + org.eclipse.jetty + + + test-jar + + + org.apache.hadoop + hadoop-mapreduce-examples + ${hadoop.version} + test + jar + + + org.powermock + powermock-api-mockito + 1.7.4 + test 
+ + + org.powermock + powermock-module-junit4 + 1.7.4 + test + + + org.apache.hadoop + hadoop-minicluster + ${hadoop.version} + test + + + diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/BasicMetricsConsumer.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/BasicMetricsConsumer.java index 37eebb5..ac6ded8 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/BasicMetricsConsumer.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/BasicMetricsConsumer.java @@ -31,10 +31,12 @@ class MetricRecord { * Operation name, such as listStatus. */ private String opName; + /** * Operation result: true for success, or false for failure. */ private boolean success; + /** * Operation cost time in ms. */ @@ -43,40 +45,66 @@ class MetricRecord { private String opType; //opName - static final String READ = "read"; - static final String CLOSE = "close"; - static final String READFULLY = "readFully"; + public static final String READ = "read"; + + public static final String CLOSE = "close"; + + public static final String READFULLY = "readFully"; + static final String HFLUSH = "hflush"; + static final String WRITE = "write"; + static final String CREATE = "create"; + static final String CREATE_NR = "createNonRecursive"; + static final String APPEND = "append"; + static final String RENAME = "rename"; + static final String DELETE = "delete"; + static final String LIST_STATUS = "listStatus"; + static final String MKDIRS = "mkdirs"; + static final String GET_FILE_STATUS = "getFileStatus"; + static final String GET_CONTENT_SUMMARY = "getContentSummary"; + static final String COPYFROMLOCAL = "copyFromLocalFile"; + static final String LIST_FILES = "listFiles"; + static final String LIST_LOCATED_STS = "listLocatedStatus"; + static final String OPEN = "open"; //opType - static final String ONEBYTE = "1byte"; - static final String BYTEBUF = "byteBuf"; - static final String INPUT = "input"; - static final String RANDOM = 
"random"; - static final String SEQ = "seq"; + public static final String ONEBYTE = "1byte"; + + public static final String BYTEBUF = "byteBuf"; + + public static final String INPUT = "input"; + + public static final String RANDOM = "random"; + + public static final String SEQ = "seq"; + static final String OUTPUT = "output"; + static final String FLAGS = "flags"; + static final String NONRECURSIVE = "nonrecursive"; + static final String RECURSIVE = "recursive"; + static final String FS = "fs"; + static final String OVERWRITE = "overwrite"; - public MetricRecord(String opType, String opName, - boolean success, long costTime) { + public MetricRecord(String opType, String opName, boolean success, long costTime) { this.opName = opName; this.opType = opType; this.success = success; @@ -101,12 +129,8 @@ public String getOpType() { @Override public String toString() { - return "MetricRecord{" - + "opName='" + opName - + ", success=" + success - + ", costTime=" + costTime - + ", opType=" + opType - + '}'; + return "MetricRecord{" + "opName='" + opName + ", success=" + success + ", costTime=" + costTime + + ", opType=" + opType + '}'; } } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/BlockingThreadPoolExecutorService.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/BlockingThreadPoolExecutorService.java index 2b5c230..7583478 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/BlockingThreadPoolExecutorService.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/BlockingThreadPoolExecutorService.java @@ -42,14 +42,12 @@ * /apache/s4/comm/staging/BlockingThreadPoolExecutorService.java) */ @InterfaceAudience.Private -final class BlockingThreadPoolExecutorService - extends SemaphoredDelegatingExecutor { +final class BlockingThreadPoolExecutorService extends SemaphoredDelegatingExecutor { /** * Class logger. 
*/ - private static final Logger LOG = - LoggerFactory.getLogger(BlockingThreadPoolExecutorService.class); + private static final Logger LOG = LoggerFactory.getLogger(BlockingThreadPoolExecutorService.class); /** * Number of thread pools. @@ -61,10 +59,8 @@ final class BlockingThreadPoolExecutorService */ private final ThreadPoolExecutor eventProcessingExecutor; - private BlockingThreadPoolExecutorService( - final int permitCount, final ThreadPoolExecutor executor) { - super(MoreExecutors.listeningDecorator(executor), - permitCount, false); + private BlockingThreadPoolExecutorService(final int permitCount, final ThreadPoolExecutor executor) { + super(MoreExecutors.listeningDecorator(executor), permitCount, false); this.eventProcessingExecutor = executor; } @@ -83,8 +79,7 @@ private static ThreadFactory getNamedThreadFactory(final String prefix) { @Override public Thread newThread(@NotNull final Runnable r) { - final String name = prefix + "-pool" + poolNum + "-t" - + threadNumber.getAndIncrement(); + final String name = prefix + "-pool" + poolNum + "-t" + threadNumber.getAndIncrement(); return new Thread(r, name); } }; @@ -122,31 +117,20 @@ static ThreadFactory newDaemonThreadFactory(final String prefix) { * @param prefixName prefix of name for threads * @return new instance of BlockingThreadPoolExecutorService */ - static BlockingThreadPoolExecutorService newInstance( - final int activeTasks, final int waitingTasks, final long keepAliveTime, - final String prefixName) { + static BlockingThreadPoolExecutorService newInstance(final int activeTasks, final int waitingTasks, + final long keepAliveTime, final String prefixName) { /* Although we generally only expect up to waitingTasks tasks in the queue, we need to be able to buffer all tasks in case dequeueing is slower than enqueueing. 
*/ - final BlockingQueue workQueue = new LinkedBlockingQueue<>( - waitingTasks + activeTasks); - ThreadPoolExecutor eventProcessingExecutor = - new ThreadPoolExecutor( - activeTasks, - activeTasks, - keepAliveTime, - TimeUnit.SECONDS, - workQueue, - newDaemonThreadFactory(prefixName), - (r, executor) -> { - // This is not expected to happen. - LOG.error("Could not submit task to executor {}", - executor.toString()); - }); + final BlockingQueue workQueue = new LinkedBlockingQueue<>(waitingTasks + activeTasks); + ThreadPoolExecutor eventProcessingExecutor = new ThreadPoolExecutor(activeTasks, activeTasks, keepAliveTime, + TimeUnit.SECONDS, workQueue, newDaemonThreadFactory(prefixName), (r, executor) -> { + // This is not expected to happen. + LOG.error("Could not submit task to executor {}", executor.toString()); + }); eventProcessingExecutor.allowCoreThreadTimeOut(true); - return new BlockingThreadPoolExecutorService( - waitingTasks + activeTasks, eventProcessingExecutor); + return new BlockingThreadPoolExecutorService(waitingTasks + activeTasks, eventProcessingExecutor); } /** @@ -160,7 +144,6 @@ private int getActiveCount() { @Override public String toString() { - return "BlockingThreadPoolExecutorService{" + super.toString() - + ", activeCount=" + getActiveCount() + '}'; + return "BlockingThreadPoolExecutorService{" + super.toString() + ", activeCount=" + getActiveCount() + '}'; } } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/Constants.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/Constants.java deleted file mode 100644 index 37374d3..0000000 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/Constants.java +++ /dev/null @@ -1,280 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. 
The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.obs; - -import org.apache.hadoop.classification.InterfaceAudience; -import org.apache.hadoop.classification.InterfaceStability; - -/** - * All the constants used with the {@link OBSFileSystem}. - * - *

Some of the strings are marked as {@code Unstable}. This means that they may be unsupported in - * future; at which point they will be marked as deprecated and simply ignored. - */ -@InterfaceAudience.Public -@InterfaceStability.Evolving -public final class Constants { - - /** The minimum multipart size which OBS supports. */ - public static final int MULTIPART_MIN_SIZE = 5 * 1024 * 1024; - // OBS access key - public static final String ACCESS_KEY = "fs.obs.access.key"; - // OBS secret key - public static final String SECRET_KEY = "fs.obs.secret.key"; - // obs credentials provider - public static final String OBS_CREDENTIALS_PROVIDER = "fs.obs.credentials.provider"; - // obs client security provider - public static final String OBS_SECURITY_PROVIDER = "fs.obs.security.provider"; - /** - * Extra set of security credentials which will be prepended to that set in {@code - * "hadoop.security.credential.provider.path"}. This extra option allows for per-bucket overrides. - */ - public static final String OBS_SECURITY_CREDENTIAL_PROVIDER_PATH = - "fs.obs.security.credential.provider.path"; - // session token for when using TemporaryOBSCredentialsProvider - public static final String SESSION_TOKEN = "fs.obs.session.token"; - // number of simultaneous connections to obs - public static final String MAXIMUM_CONNECTIONS = "fs.obs.connection.maximum"; - public static final int DEFAULT_MAXIMUM_CONNECTIONS = 1000; - // connect to obs over ssl? - public static final String SECURE_CONNECTIONS = "fs.obs.connection.ssl.enabled"; - public static final boolean DEFAULT_SECURE_CONNECTIONS = false; - // use a custom endpoint? - public static final String ENDPOINT = "fs.obs.endpoint"; - // connect to obs through a proxy server? 
- public static final String PROXY_HOST = "fs.obs.proxy.host"; - public static final String PROXY_PORT = "fs.obs.proxy.port"; - public static final String PROXY_USERNAME = "fs.obs.proxy.username"; - public static final String PROXY_PASSWORD = "fs.obs.proxy.password"; - // number of times we should retry errors - public static final String MAX_ERROR_RETRIES = "fs.obs.attempts.maximum"; - public static final int DEFAULT_MAX_ERROR_RETRIES = 3; - // seconds until we give up trying to establish a connection to obs - public static final String ESTABLISH_TIMEOUT = "fs.obs.connection.establish.timeout"; - public static final int DEFAULT_ESTABLISH_TIMEOUT = 120000; - // seconds until we give up on a connection to obs - public static final String SOCKET_TIMEOUT = "fs.obs.connection.timeout"; - public static final int DEFAULT_SOCKET_TIMEOUT = 120000; - // socket send buffer to be used in OBS SDK - public static final String SOCKET_SEND_BUFFER = "fs.obs.socket.send.buffer"; - public static final int DEFAULT_SOCKET_SEND_BUFFER = 64 * 1024; - // socket send buffer to be used in OBS SDK - public static final String SOCKET_RECV_BUFFER = "fs.obs.socket.recv.buffer"; - public static final int DEFAULT_SOCKET_RECV_BUFFER = 64 * 1024; - // number of records to get while paging through a directory listing - public static final String MAX_PAGING_KEYS = "fs.obs.paging.maximum"; - public static final int DEFAULT_MAX_PAGING_KEYS = 1000; - // the maximum number of threads to allow in the pool used by TransferManager - public static final String MAX_THREADS = "fs.obs.threads.max"; - public static final int DEFAULT_MAX_THREADS = 20; - // the maximum number of tasks cached if all threads are already uploading - public static final String MAX_TOTAL_TASKS = "fs.obs.max.total.tasks"; - public static final int DEFAULT_MAX_TOTAL_TASKS = 20; - -// public static final String CORE_COPY_THREADS = "fs.obs.copy.threads.core"; -// public static final int DEFAULT_CORE_COPY_THREADS = 20; - public static 
final String MAX_COPY_THREADS = "fs.obs.copy.threads.max"; - public static final int DEFAULT_MAX_COPY_THREADS = 40; - -// public static final String CORE_DELETE_THREADS = "fs.obs.delete.threads.core"; -// public static final int DEFAULT_CORE_DELETE_THREADS = 10; - public static final String MAX_DELETE_THREADS = "fs.obs.delete.threads.max"; - public static final int DEFAULT_MAX_DELETE_THREADS = 20; - - //Read thread configuration for read-ahead input stream - public static final String MAX_READ_THREADS = "fs.obs.threads.read.max"; - public static final int DEFAULT_MAX_READ_THREADS = 20; -// public static final String CORE_READ_THREADS = "fs.obs.threads.read.core"; -// public static final int DEFAULT_CORE_READ_THREADS = 5; - // Use read-ahead input stream - public static final String READAHEAD_INPUTSTREAM_ENABLED = "fs.obs.readahead.inputstream.enabled"; - public static final boolean READAHEAD_INPUTSTREAM_ENABLED_DEFAULT = false; - public static final String BUFFER_PART_SIZE = "fs.obs.buffer.part.size"; - public static final int DEFAULT_BUFFER_PART_SIZE = 64 * 1024; - public static final String BUFFER_MAX_RANGE = "fs.obs.buffer.max.range"; - public static final int DEFAULT_BUFFER_MAX_RANGE = 20 * 1024 * 1024; - // unused option: maintained for compile-time compatibility. 
- // if set, a warning is logged in OBS during init - @Deprecated public static final String CORE_THREADS = "fs.obs.threads.core"; - // the time an idle thread waits before terminating - public static final String KEEPALIVE_TIME = "fs.obs.threads.keepalivetime"; - public static final int DEFAULT_KEEPALIVE_TIME = 60; - // size of each of or multipart pieces in bytes - public static final String MULTIPART_SIZE = "fs.obs.multipart.size"; - public static final long DEFAULT_MULTIPART_SIZE = 104857600; // 100 MB - // minimum size in bytes before we start a multipart uploads or copy - public static final String MIN_MULTIPART_THRESHOLD = "fs.obs.multipart.threshold"; - public static final long DEFAULT_MIN_MULTIPART_THRESHOLD = Integer.MAX_VALUE; - // enable multiobject-delete calls? - public static final String ENABLE_MULTI_DELETE = "fs.obs.multiobjectdelete.enable"; - // max number of objects in one multiobject-delete call. - // this option takes effect only when the option 'ENABLE_MULTI_DELETE' is set to 'true'. - public static final String MULTI_DELETE_MAX_NUMBER = "fs.obs.multiobjectdelete.maximum"; - public static final int MULTI_DELETE_DEFAULT_NUMBER = 1000; - // delete recursively or not. - public static final String MULTI_DELETE_RECURSION = "fs.obs.multiobjectdelete.recursion"; - // minimum number of objects in one multiobject-delete call - public static final String MULTI_DELETE_THRESHOLD = "fs.obs.multiobjectdelete.threshold"; - public static final int MULTI_DELETE_DEFAULT_THRESHOLD = 3; - // support to rename a folder to an empty folder or not. - public static final String RENAME_TO_EMPTY_FOLDER = "fs.obs.rename.to_empty_folder"; - // comma separated list of directories - public static final String BUFFER_DIR = "fs.obs.buffer.dir"; - // switch to the fast block-by-block upload mechanism - public static final String FAST_UPLOAD = "fs.obs.fast.upload"; - public static final boolean DEFAULT_FAST_UPLOAD = true; - /** What buffer to use. 
Default is {@link #FAST_UPLOAD_BUFFER_DISK} Value: {@value} */ - @InterfaceStability.Unstable - public static final String FAST_UPLOAD_BUFFER = "fs.obs.fast.upload.buffer"; - - // initial size of memory buffer for a fast upload - // @Deprecated - // public static final String FAST_BUFFER_SIZE = "fs.obs.fast.buffer.size"; - // public static final int DEFAULT_FAST_BUFFER_SIZE = 1048576; //1MB - /** Buffer blocks to disk: {@value}. Capacity is limited to available disk space. */ - @InterfaceStability.Unstable public static final String FAST_UPLOAD_BUFFER_DISK = "disk"; - /** Use an in-memory array. Fast but will run of heap rapidly: {@value}. */ - @InterfaceStability.Unstable public static final String FAST_UPLOAD_BUFFER_ARRAY = "array"; - /** - * Use a byte buffer. May be more memory efficient than the {@link #FAST_UPLOAD_BUFFER_ARRAY}: - * {@value}. - */ - @InterfaceStability.Unstable public static final String FAST_UPLOAD_BYTEBUFFER = "bytebuffer"; - /** Default buffer option: {@value}. */ - @InterfaceStability.Unstable - public static final String DEFAULT_FAST_UPLOAD_BUFFER = FAST_UPLOAD_BUFFER_DISK; - /** - * Maximum Number of blocks a single output stream can have active (uploading, or queued to the - * central FileSystem instance's pool of queued operations. This stops a single stream overloading - * the shared thread pool. {@value} - * - *

Default is {@link #DEFAULT_FAST_UPLOAD_ACTIVE_BLOCKS} - */ - @InterfaceStability.Unstable - public static final String FAST_UPLOAD_ACTIVE_BLOCKS = "fs.obs.fast.upload.active.blocks"; - /** Limit of queued block upload operations before writes block. Value: {@value} */ - @InterfaceStability.Unstable public static final int DEFAULT_FAST_UPLOAD_ACTIVE_BLOCKS = 4; - // Private | PublicRead | PublicReadWrite | AuthenticatedRead | - // LogDeliveryWrite | BucketOwnerRead | BucketOwnerFullControl - public static final String CANNED_ACL = "fs.obs.acl.default"; - public static final String DEFAULT_CANNED_ACL = ""; - // should we try to purge old multipart uploads when starting up - public static final String PURGE_EXISTING_MULTIPART = "fs.obs.multipart.purge"; - public static final boolean DEFAULT_PURGE_EXISTING_MULTIPART = false; - // purge any multipart uploads older than this number of seconds - public static final String PURGE_EXISTING_MULTIPART_AGE = "fs.obs.multipart.purge.age"; - public static final long DEFAULT_PURGE_EXISTING_MULTIPART_AGE = 86400; - public static final String OBS_FOLDER_SUFFIX = "_$folder$"; - public static final String FS_OBS_BLOCK_SIZE = "fs.obs.block.size"; - public static final String FS_OBS = "obs"; - /** Prefix for all OBS properties: {@value}. */ - public static final String FS_OBS_PREFIX = "fs.obs."; - /** Prefix for OBS bucket-specific properties: {@value}. */ - public static final String FS_OBS_BUCKET_PREFIX = "fs.obs.bucket."; - - public static final int OBS_DEFAULT_PORT = -1; - public static final String USER_AGENT_PREFIX = "fs.obs.user.agent.prefix"; - /** read ahead buffer size to prevent connection re-establishments. */ - public static final String READAHEAD_RANGE = "fs.obs.readahead.range"; - - public static final long DEFAULT_READAHEAD_RANGE = 1 * 1024 * 1024; - /** - * Which input strategy to use for buffering, seeking and similar when reading data. 
Value: - * {@value} - */ - @InterfaceStability.Unstable - public static final String INPUT_FADVISE = "fs.obs.experimental.input.fadvise"; - /** General input. Some seeks, some reads. Value: {@value} */ - @InterfaceStability.Unstable public static final String INPUT_FADV_NORMAL = "normal"; - /** Optimized for sequential access. Value: {@value} */ - @InterfaceStability.Unstable public static final String INPUT_FADV_SEQUENTIAL = "sequential"; - /** - * Optimized purely for random seek+read/positionedRead operations; The performance of sequential - * IO may be reduced in exchange for more efficient {@code seek()} operations. Value: {@value} - */ - @InterfaceStability.Unstable public static final String INPUT_FADV_RANDOM = "random"; - - @InterfaceAudience.Private @InterfaceStability.Unstable - public static final String OBS_CLIENT_FACTORY_IMPL = "fs.obs.s3.client.factory.impl"; - - @InterfaceAudience.Private @InterfaceStability.Unstable - public static final Class DEFAULT_OBS_CLIENT_FACTORY_IMPL = - ObsClientFactory.DefaultObsClientFactory.class; - /** Maximum number of partitions in a multipart upload: {@value}. 
*/ - @InterfaceAudience.Private public static final int MAX_MULTIPART_COUNT = 10000; - /** OBS Client configuration */ - // idleConnectionTime - public static final String IDLE_CONNECTION_TIME = "fs.obs.idle.connection.time"; - - public static final int DEFAULT_IDLE_CONNECTION_TIME = 30000; - // maxIdleConnections - public static final String MAX_IDLE_CONNECTIONS = "fs.obs.max.idle.connections"; - public static final int DEFAULT_MAX_IDLE_CONNECTIONS = 1000; - // keepAlive - public static final String KEEP_ALIVE = "fs.obs.keep.alive"; - public static final boolean DEFAULT_KEEP_ALIVE = true; - // verifyResponseContentType - public static final String VALIDATE_CERTIFICATE = "fs.obs.validate.certificate"; - public static final boolean DEFAULT_VALIDATE_CERTIFICATE = false; - // verifyResponseContentType - public static final String VERIFY_RESPONSE_CONTENT_TYPE = "fs.obs.verify.response.content.type"; - public static final boolean DEFAULT_VERIFY_RESPONSE_CONTENT_TYPE = true; - // uploadStreamRetryBufferSize - public static final String UPLOAD_STREAM_RETRY_SIZE = "fs.obs.upload.stream.retry.buffer.size"; - public static final int DEFAULT_UPLOAD_STREAM_RETRY_SIZE = 512 * 1024; - // readBufferSize - public static final String READ_BUFFER_SIZE = "fs.obs.read.buffer.size"; - public static final int DEFAULT_READ_BUFFER_SIZE = 8192; - // writeBufferSize - public static final String WRITE_BUFFER_SIZE = "fs.obs.write.buffer.size"; - public static final int DEFAULT_WRITE_BUFFER_SIZE = 8192; - // cname - public static final String CNAME = "fs.obs.cname"; - public static final boolean DEFAULT_CNAME = false; - // isStrictHostnameVerification - public static final String STRICT_HOSTNAME_VERIFICATION = "fs.obs.strict.hostname.verification"; - public static final boolean DEFAULT_STRICT_HOSTNAME_VERIFICATION = false; - // test Path - public static final String PATH_LOCAL_TEST = "fs.obs.test.local.path"; - // size of object copypart pieces in bytes - public static final String 
COPY_PART_SIZE = "fs.obs.copypart.size"; - public static final long MAX_COPY_PART_SIZE = 5368709120L; // 5GB - public static final long DEFAULT_COPY_PART_SIZE = 104857600L; // 100MB - -// public static final String CORE_COPY_PART_THREADS = "fs.obs.copypart.threads.core"; -// public static final int DEFAULT_CORE_COPY_PART_THREADS = 20; - public static final String MAX_COPY_PART_THREADS = "fs.obs.copypart.threads.max"; - public static final int DEFAULT_MAX_COPY_PART_THREADS = 40; - // switch to the fast delete - public static final String TRASH_ENALBLE = "fs.obs.trash.enable"; - public static final String OBS_CONTENT_SUMMARY_ENABLE = "fs.obs.content.summary.enable"; - public static final boolean DEFAULT_TRASH = false; - // The fast delete recycle directory - public static final String TRASH_DIR = "fs.obs.trash.dir"; - - // encryption type is sse-kms or sse-c - public static final String SSE_TYPE = "fs.obs.server-side-encryption-type"; - // kms key id for sse-kms, while key base64 encoded content for sse-c - public static final String SSE_KEY = "fs.obs.server-side-encryption-key"; - // array first block size - public static final String FAST_UPLOAD_BUFFER_ARRAY_FIRST_BLOCK_SIZE = "fs.obs.fast.upload.array.first.buffer"; - public static final int FAST_UPLOAD_BUFFER_ARRAY_FIRST_BLOCK_SIZE_DEFAULT = 1 * 1024 * 1024; - - private Constants() {} -} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/DefaultMetricsConsumer.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/DefaultMetricsConsumer.java index 2dca199..4aff4af 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/DefaultMetricsConsumer.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/DefaultMetricsConsumer.java @@ -32,8 +32,7 @@ class DefaultMetricsConsumer implements BasicMetricsConsumer { /** * Class logger. 
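The removed Constants block above enumerates the `fs.obs.*` configuration keys. For orientation, here is a minimal `core-site.xml` fragment using a few of those keys; the values shown are illustrative examples only (not recommended defaults), and the `fs.obs.trash.dir` path is hypothetical:

```xml
<!-- Illustrative core-site.xml fragment. Key names come from the Constants
     above; values are example settings, not recommendations. -->
<configuration>
  <property>
    <name>fs.obs.fast.upload.buffer</name>
    <value>disk</value> <!-- disk | array | bytebuffer -->
  </property>
  <property>
    <name>fs.obs.readahead.range</name>
    <value>1048576</value> <!-- bytes; the code's default is 1 MB -->
  </property>
  <property>
    <name>fs.obs.trash.enable</name>
    <value>false</value> <!-- fast-delete (trash) switch, default off -->
  </property>
  <property>
    <name>fs.obs.trash.dir</name>
    <value>/user/.fastdelete</value> <!-- hypothetical recycle directory -->
  </property>
</configuration>
```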
*/ - private static final Logger LOG = LoggerFactory.getLogger( - DefaultMetricsConsumer.class); + private static final Logger LOG = LoggerFactory.getLogger(DefaultMetricsConsumer.class); /** * URI of the FileSystem instance. @@ -48,14 +47,12 @@ class DefaultMetricsConsumer implements BasicMetricsConsumer { /** * Default metrics consumer that prints debug logs. * - * @param uriName URI of the owner FileSystem + * @param uriName URI of the owner FileSystem */ - DefaultMetricsConsumer(final URI uriName, - final Configuration configuration) { + DefaultMetricsConsumer(final URI uriName, final Configuration configuration) { this.uri = uriName; this.conf = configuration; - LOG.debug("DefaultMetricsConsumer with URI [{}] and " - + "Configuration[{}]", this.uri, this.conf); + LOG.debug("DefaultMetricsConsumer with URI [{}] and " + "Configuration[{}]", this.uri, this.conf); } /** @@ -67,10 +64,8 @@ class DefaultMetricsConsumer implements BasicMetricsConsumer { @Override public boolean putMetrics(MetricRecord metricRecord) { if (LOG.isDebugEnabled()) { - LOG.debug("[Metrics]: url[{}], opName [{}], costTime[{}], " - + "opResult[{}]", this.uri, - metricRecord.getOpName(),metricRecord.getCostTime(), - metricRecord.isSuccess()); + LOG.debug("[Metrics]: url[{}], opName [{}], costTime[{}], " + "opResult[{}]", this.uri, + metricRecord.getOpName(), metricRecord.getCostTime(), metricRecord.isSuccess()); } return true; } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/DefaultOBSClientFactory.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/DefaultOBSClientFactory.java index 1234c16..2c9efc6 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/DefaultOBSClientFactory.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/DefaultOBSClientFactory.java @@ -45,8 +45,7 @@ class DefaultOBSClientFactory extends Configured implements OBSClientFactory { /** * Class logger. 
*/ - private static final Logger LOG = LoggerFactory.getLogger( - DefaultOBSClientFactory.class); + private static final Logger LOG = LoggerFactory.getLogger(DefaultOBSClientFactory.class); /** * Initializes all OBS SDK settings related to connection management. @@ -54,107 +53,85 @@ class DefaultOBSClientFactory extends Configured implements OBSClientFactory { * @param conf Hadoop configuration * @param obsConf OBS SDK configuration */ - private static void initConnectionSettings(final Configuration conf, - final ExtObsConfiguration obsConf) { + private static void initConnectionSettings(final Configuration conf, final ExtObsConfiguration obsConf) { obsConf.setMaxConnections( - OBSCommonUtils.intOption(conf, OBSConstants.MAXIMUM_CONNECTIONS, - OBSConstants.DEFAULT_MAXIMUM_CONNECTIONS, + OBSCommonUtils.intOption(conf, OBSConstants.MAXIMUM_CONNECTIONS, OBSConstants.DEFAULT_MAXIMUM_CONNECTIONS, 1)); - boolean secureConnections = conf.getBoolean( - OBSConstants.SECURE_CONNECTIONS, + boolean secureConnections = conf.getBoolean(OBSConstants.SECURE_CONNECTIONS, OBSConstants.DEFAULT_SECURE_CONNECTIONS); String originEndPoint = conf.getTrimmed(OBSConstants.ENDPOINT, ""); - if (!originEndPoint.isEmpty() - && !originEndPoint.startsWith(OBSConstants.HTTP_PREFIX) + if (!originEndPoint.isEmpty() && !originEndPoint.startsWith(OBSConstants.HTTP_PREFIX) && !originEndPoint.startsWith(OBSConstants.HTTPS_PREFIX)) { String newEndPointWithSchema; if (secureConnections) { - newEndPointWithSchema = - OBSConstants.HTTPS_PREFIX + originEndPoint; + newEndPointWithSchema = OBSConstants.HTTPS_PREFIX + originEndPoint; } else { - newEndPointWithSchema = - OBSConstants.HTTP_PREFIX + originEndPoint; + newEndPointWithSchema = OBSConstants.HTTP_PREFIX + originEndPoint; } conf.set(OBSConstants.ENDPOINT, newEndPointWithSchema); } obsConf.setMaxErrorRetry( - OBSCommonUtils.intOption(conf, OBSConstants.MAX_ERROR_RETRIES, - OBSConstants.DEFAULT_MAX_ERROR_RETRIES, 0)); + OBSCommonUtils.intOption(conf, 
OBSConstants.MAX_ERROR_RETRIES, OBSConstants.DEFAULT_MAX_ERROR_RETRIES, 0)); obsConf.setConnectionTimeout( - OBSCommonUtils.intOption(conf, OBSConstants.ESTABLISH_TIMEOUT, - OBSConstants.DEFAULT_ESTABLISH_TIMEOUT, 0)); + OBSCommonUtils.intOption(conf, OBSConstants.ESTABLISH_TIMEOUT, OBSConstants.DEFAULT_ESTABLISH_TIMEOUT, 0)); obsConf.setSocketTimeout( - OBSCommonUtils.intOption(conf, OBSConstants.SOCKET_TIMEOUT, - OBSConstants.DEFAULT_SOCKET_TIMEOUT, 0)); + OBSCommonUtils.intOption(conf, OBSConstants.SOCKET_TIMEOUT, OBSConstants.DEFAULT_SOCKET_TIMEOUT, 0)); obsConf.setIdleConnectionTime( - OBSCommonUtils.intOption(conf, OBSConstants.IDLE_CONNECTION_TIME, - OBSConstants.DEFAULT_IDLE_CONNECTION_TIME, + OBSCommonUtils.intOption(conf, OBSConstants.IDLE_CONNECTION_TIME, OBSConstants.DEFAULT_IDLE_CONNECTION_TIME, 1)); obsConf.setMaxIdleConnections( - OBSCommonUtils.intOption(conf, OBSConstants.MAX_IDLE_CONNECTIONS, - OBSConstants.DEFAULT_MAX_IDLE_CONNECTIONS, + OBSCommonUtils.intOption(conf, OBSConstants.MAX_IDLE_CONNECTIONS, OBSConstants.DEFAULT_MAX_IDLE_CONNECTIONS, 1)); obsConf.setReadBufferSize( - OBSCommonUtils.intOption(conf, OBSConstants.READ_BUFFER_SIZE, - OBSConstants.DEFAULT_READ_BUFFER_SIZE, + OBSCommonUtils.intOption(conf, OBSConstants.READ_BUFFER_SIZE, OBSConstants.DEFAULT_READ_BUFFER_SIZE, -1)); // to be // modified obsConf.setWriteBufferSize( - OBSCommonUtils.intOption(conf, OBSConstants.WRITE_BUFFER_SIZE, - OBSConstants.DEFAULT_WRITE_BUFFER_SIZE, + OBSCommonUtils.intOption(conf, OBSConstants.WRITE_BUFFER_SIZE, OBSConstants.DEFAULT_WRITE_BUFFER_SIZE, -1)); // to be obsConf.setSocketReadBufferSize( - OBSCommonUtils.intOption(conf, OBSConstants.SOCKET_RECV_BUFFER, - OBSConstants.DEFAULT_SOCKET_RECV_BUFFER, -1)); + OBSCommonUtils.intOption(conf, OBSConstants.SOCKET_RECV_BUFFER, OBSConstants.DEFAULT_SOCKET_RECV_BUFFER, + -1)); obsConf.setSocketWriteBufferSize( - OBSCommonUtils.intOption(conf, OBSConstants.SOCKET_SEND_BUFFER, - 
OBSConstants.DEFAULT_SOCKET_SEND_BUFFER, -1)); + OBSCommonUtils.intOption(conf, OBSConstants.SOCKET_SEND_BUFFER, OBSConstants.DEFAULT_SOCKET_SEND_BUFFER, + -1)); - obsConf.setKeepAlive(conf.getBoolean(OBSConstants.KEEP_ALIVE, - OBSConstants.DEFAULT_KEEP_ALIVE)); + obsConf.setKeepAlive(conf.getBoolean(OBSConstants.KEEP_ALIVE, OBSConstants.DEFAULT_KEEP_ALIVE)); obsConf.setValidateCertificate( - conf.getBoolean(OBSConstants.VALIDATE_CERTIFICATE, - OBSConstants.DEFAULT_VALIDATE_CERTIFICATE)); - obsConf.setVerifyResponseContentType( - conf.getBoolean(OBSConstants.VERIFY_RESPONSE_CONTENT_TYPE, - OBSConstants.DEFAULT_VERIFY_RESPONSE_CONTENT_TYPE)); - obsConf.setCname( - conf.getBoolean(OBSConstants.CNAME, OBSConstants.DEFAULT_CNAME)); - obsConf.setIsStrictHostnameVerification( - conf.getBoolean(OBSConstants.STRICT_HOSTNAME_VERIFICATION, - OBSConstants.DEFAULT_STRICT_HOSTNAME_VERIFICATION)); + conf.getBoolean(OBSConstants.VALIDATE_CERTIFICATE, OBSConstants.DEFAULT_VALIDATE_CERTIFICATE)); + obsConf.setVerifyResponseContentType(conf.getBoolean(OBSConstants.VERIFY_RESPONSE_CONTENT_TYPE, + OBSConstants.DEFAULT_VERIFY_RESPONSE_CONTENT_TYPE)); + obsConf.setCname(conf.getBoolean(OBSConstants.CNAME, OBSConstants.DEFAULT_CNAME)); + obsConf.setIsStrictHostnameVerification(conf.getBoolean(OBSConstants.STRICT_HOSTNAME_VERIFICATION, + OBSConstants.DEFAULT_STRICT_HOSTNAME_VERIFICATION)); // sdk auth type negotiation enable - obsConf.setAuthTypeNegotiation( - conf.getBoolean(OBSConstants.SDK_AUTH_TYPE_NEGOTIATION_ENABLE, - OBSConstants.DEFAULT_SDK_AUTH_TYPE_NEGOTIATION_ENABLE)); + obsConf.setAuthTypeNegotiation(conf.getBoolean(OBSConstants.SDK_AUTH_TYPE_NEGOTIATION_ENABLE, + OBSConstants.DEFAULT_SDK_AUTH_TYPE_NEGOTIATION_ENABLE)); // set SDK AUTH TYPE to OBS when auth type negotiation unenabled if (!obsConf.isAuthTypeNegotiation()) { obsConf.setAuthType(AuthTypeEnum.OBS); } // okhttp retryOnConnectionFailure switch, default set to true - obsConf.retryOnConnectionFailureInOkhttp( - 
conf.getBoolean(OBSConstants.SDK_RETRY_ON_CONNECTION_FAILURE_ENABLE, - OBSConstants.DEFAULT_SDK_RETRY_ON_CONNECTION_FAILURE_ENABLE)); + obsConf.retryOnConnectionFailureInOkhttp(conf.getBoolean(OBSConstants.SDK_RETRY_ON_CONNECTION_FAILURE_ENABLE, + OBSConstants.DEFAULT_SDK_RETRY_ON_CONNECTION_FAILURE_ENABLE)); // sdk max retry times on unexpected end of stream exception, // default: -1,don't retry - int retryTime = conf.getInt( - OBSConstants.SDK_RETRY_TIMES_ON_UNEXPECTED_END_EXCEPTION, + int retryTime = conf.getInt(OBSConstants.SDK_RETRY_TIMES_ON_UNEXPECTED_END_EXCEPTION, OBSConstants.DEFAULT_SDK_RETRY_TIMES_ON_UNEXPECTED_END_EXCEPTION); - if (retryTime > 0 - && retryTime < OBSConstants.DEFAULT_MAX_SDK_CONNECTION_RETRY_TIMES + if (retryTime > 0 && retryTime < OBSConstants.DEFAULT_MAX_SDK_CONNECTION_RETRY_TIMES || !obsConf.isRetryOnConnectionFailureInOkhttp() && retryTime < 0) { retryTime = OBSConstants.DEFAULT_MAX_SDK_CONNECTION_RETRY_TIMES; } @@ -169,24 +146,18 @@ private static void initConnectionSettings(final Configuration conf, * @throws IllegalArgumentException if misconfigured * @throws IOException on any failure to initialize proxy */ - private static void initProxySupport(final Configuration conf, - final ExtObsConfiguration obsConf) + private static void initProxySupport(final Configuration conf, final ExtObsConfiguration obsConf) throws IllegalArgumentException, IOException { String proxyHost = conf.getTrimmed(OBSConstants.PROXY_HOST, ""); int proxyPort = conf.getInt(OBSConstants.PROXY_PORT, -1); if (!proxyHost.isEmpty() && proxyPort < 0) { - if (conf.getBoolean(OBSConstants.SECURE_CONNECTIONS, - OBSConstants.DEFAULT_SECURE_CONNECTIONS)) { - LOG.warn("Proxy host set without port. Using HTTPS default " - + OBSConstants.DEFAULT_HTTPS_PORT); - obsConf.getHttpProxy() - .setProxyPort(OBSConstants.DEFAULT_HTTPS_PORT); + if (conf.getBoolean(OBSConstants.SECURE_CONNECTIONS, OBSConstants.DEFAULT_SECURE_CONNECTIONS)) { + LOG.warn("Proxy host set without port. 
Using HTTPS default " + OBSConstants.DEFAULT_HTTPS_PORT); + obsConf.getHttpProxy().setProxyPort(OBSConstants.DEFAULT_HTTPS_PORT); } else { - LOG.warn("Proxy host set without port. Using HTTP default " - + OBSConstants.DEFAULT_HTTP_PORT); - obsConf.getHttpProxy() - .setProxyPort(OBSConstants.DEFAULT_HTTP_PORT); + LOG.warn("Proxy host set without port. Using HTTP default " + OBSConstants.DEFAULT_HTTP_PORT); + obsConf.getHttpProxy().setProxyPort(OBSConstants.DEFAULT_HTTP_PORT); } } String proxyUsername = conf.getTrimmed(OBSConstants.PROXY_USERNAME); @@ -196,23 +167,16 @@ private static void initProxySupport(final Configuration conf, proxyPassword = new String(proxyPass).trim(); } if ((proxyUsername == null) != (proxyPassword == null)) { - String msg = - "Proxy error: " + OBSConstants.PROXY_USERNAME + " or " - + OBSConstants.PROXY_PASSWORD - + " set without the other."; + String msg = "Proxy error: " + OBSConstants.PROXY_USERNAME + " or " + OBSConstants.PROXY_PASSWORD + + " set without the other."; LOG.error(msg); throw new IllegalArgumentException(msg); } - obsConf.setHttpProxy(proxyHost, proxyPort, proxyUsername, - proxyPassword); + obsConf.setHttpProxy(proxyHost, proxyPort, proxyUsername, proxyPassword); if (LOG.isDebugEnabled()) { - LOG.debug( - "Using proxy server {}:{} as user {} on " - + "domain {} as workstation {}", - obsConf.getHttpProxy().getProxyAddr(), - obsConf.getHttpProxy().getProxyPort(), - obsConf.getHttpProxy().getProxyUName(), - obsConf.getHttpProxy().getDomain(), + LOG.debug("Using proxy server {}:{} as user {} on " + "domain {} as workstation {}", + obsConf.getHttpProxy().getProxyAddr(), obsConf.getHttpProxy().getProxyPort(), + obsConf.getHttpProxy().getProxyUName(), obsConf.getHttpProxy().getDomain(), obsConf.getHttpProxy().getWorkstation()); } } @@ -226,43 +190,29 @@ private static void initProxySupport(final Configuration conf, * @return ObsClient client * @throws IOException on any failure to create Huawei OBS client */ - private static 
ObsClient createHuaweiObsClient(final Configuration conf, - final ObsConfiguration obsConf, final URI name) - throws IOException { + private static ObsClient createHuaweiObsClient(final Configuration conf, final ObsConfiguration obsConf, + final URI name) throws IOException { Class credentialsProviderClass; BasicSessionCredential credentialsProvider; ObsClient obsClient; try { - credentialsProviderClass = conf.getClass( - OBSConstants.OBS_CREDENTIALS_PROVIDER, null); + credentialsProviderClass = conf.getClass(OBSConstants.OBS_CREDENTIALS_PROVIDER, null); } catch (RuntimeException e) { Throwable c = e.getCause() != null ? e.getCause() : e; - throw new IOException( - "From option " + OBSConstants.OBS_CREDENTIALS_PROVIDER + ' ' - + c, c); + throw new IOException("From option " + OBSConstants.OBS_CREDENTIALS_PROVIDER + ' ' + c, c); } if (credentialsProviderClass == null) { - return createObsClientWithoutCredentialsProvider(conf, obsConf, - name); + return createObsClientWithoutCredentialsProvider(conf, obsConf, name); } try { - Constructor cons = - credentialsProviderClass.getDeclaredConstructor(URI.class, - Configuration.class); - credentialsProvider = (BasicSessionCredential) cons.newInstance( - name, conf); - } catch (NoSuchMethodException - | SecurityException - | IllegalAccessException - | InstantiationException - | InvocationTargetException e) { + Constructor cons = credentialsProviderClass.getDeclaredConstructor(URI.class, Configuration.class); + credentialsProvider = (BasicSessionCredential) cons.newInstance(name, conf); + } catch (NoSuchMethodException | SecurityException | IllegalAccessException | InstantiationException | InvocationTargetException e) { Throwable c = e.getCause() != null ? 
e.getCause() : e; - throw new IOException( - "From option " + OBSConstants.OBS_CREDENTIALS_PROVIDER + ' ' - + c, c); + throw new IOException("From option " + OBSConstants.OBS_CREDENTIALS_PROVIDER + ' ' + c, c); } String sessionToken = credentialsProvider.getSessionToken(); @@ -278,12 +228,10 @@ private static ObsClient createHuaweiObsClient(final Configuration conf, return obsClient; } - private static ObsClient createObsClientWithoutCredentialsProvider( - final Configuration conf, final ObsConfiguration obsConf, - final URI name) throws IOException { + private static ObsClient createObsClientWithoutCredentialsProvider(final Configuration conf, + final ObsConfiguration obsConf, final URI name) throws IOException { ObsClient obsClient; - OBSLoginHelper.Login creds = OBSCommonUtils.getOBSAccessKeys(name, - conf); + OBSLoginHelper.Login creds = OBSCommonUtils.getOBSAccessKeys(name, conf); String ak = creds.getUser(); String sk = creds.getPassword(); @@ -299,15 +247,11 @@ private static ObsClient createObsClientWithoutCredentialsProvider( Class securityProviderClass; try { - securityProviderClass = conf.getClass( - OBSConstants.OBS_SECURITY_PROVIDER, null); - LOG.info("From option {} get {}", - OBSConstants.OBS_SECURITY_PROVIDER, securityProviderClass); + securityProviderClass = conf.getClass(OBSConstants.OBS_SECURITY_PROVIDER, null); + LOG.info("From option {} get {}", OBSConstants.OBS_SECURITY_PROVIDER, securityProviderClass); } catch (RuntimeException e) { Throwable c = e.getCause() != null ? 
e.getCause() : e; - throw new IOException( - "From option " + OBSConstants.OBS_SECURITY_PROVIDER + ' ' + c, - c); + throw new IOException("From option " + OBSConstants.OBS_SECURITY_PROVIDER + ' ' + c, c); } if (securityProviderClass == null) { @@ -317,36 +261,26 @@ private static ObsClient createObsClientWithoutCredentialsProvider( IObsCredentialsProvider securityProvider; try { - Optional cons = tryGetConstructor( - securityProviderClass, + Optional cons = tryGetConstructor(securityProviderClass, new Class[] {URI.class, Configuration.class}); if (cons.isPresent()) { - securityProvider = (IObsCredentialsProvider) cons.get() - .newInstance(name, conf); + securityProvider = (IObsCredentialsProvider) cons.get().newInstance(name, conf); } else { - securityProvider - = (IObsCredentialsProvider) securityProviderClass - .getDeclaredConstructor().newInstance(); + securityProvider = (IObsCredentialsProvider) securityProviderClass.getDeclaredConstructor() + .newInstance(); } - } catch (NoSuchMethodException - | IllegalAccessException - | InstantiationException - | InvocationTargetException - | RuntimeException e) { + } catch (NoSuchMethodException | IllegalAccessException | InstantiationException | InvocationTargetException | RuntimeException e) { Throwable c = e.getCause() != null ? 
e.getCause() : e; - throw new IOException( - "From option " + OBSConstants.OBS_SECURITY_PROVIDER + ' ' + c, - c); + throw new IOException("From option " + OBSConstants.OBS_SECURITY_PROVIDER + ' ' + c, c); } obsClient = new ObsClient(securityProvider, obsConf); return obsClient; } - public static Optional tryGetConstructor(final Class mainClss, - final Class[] args) { + public static Optional tryGetConstructor(final Class mainClss, final Class[] args) { try { Constructor constructor = mainClss.getDeclaredConstructor(args); return Optional.ofNullable(constructor); diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/Listing.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/Listing.java deleted file mode 100644 index db646bc..0000000 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/Listing.java +++ /dev/null @@ -1,576 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.obs; - -import com.obs.services.exception.ObsException; -import com.obs.services.model.ListObjectsRequest; -import com.obs.services.model.ObjectListing; -import com.obs.services.model.ObsObject; -import org.apache.hadoop.fs.*; -import org.slf4j.Logger; - -import java.io.IOException; -import java.util.ArrayList; -import java.util.List; -import java.util.ListIterator; -import java.util.NoSuchElementException; - -import static org.apache.hadoop.fs.obs.Constants.OBS_FOLDER_SUFFIX; -import static org.apache.hadoop.fs.obs.OBSUtils.*; - -/** Place for the OBS listing classes; keeps all the small classes under control. */ -public class Listing { - - /** A Path filter which accepts all filenames. */ - static final PathFilter ACCEPT_ALL = - new PathFilter() { - @Override - public boolean accept(Path file) { - return true; - } - - @Override - public String toString() { - return "ACCEPT_ALL"; - } - }; - - private static final Logger LOG = OBSFileSystem.LOG; - private final OBSFileSystem owner; - - public Listing(OBSFileSystem owner) { - this.owner = owner; - } - - /** - * Create a FileStatus iterator against a path, with a given list object request. - * - * @param listPath path of the listing - * @param request initial request to make - * @param filter the filter on which paths to accept - * @param acceptor the class/predicate to decide which entries to accept in the listing based on - * the full file status. - * @return the iterator - * @throws IOException IO Problems - */ - FileStatusListingIterator createFileStatusListingIterator( - Path listPath, - ListObjectsRequest request, - PathFilter filter, - Listing.FileStatusAcceptor acceptor) - throws IOException { - return new FileStatusListingIterator( - new ObjectListingIterator(listPath, request), filter, acceptor); - } - - /** - * Create a located status iterator over a file status iterator. 
- * - * @param statusIterator an iterator over the remote status entries - * @return a new remote iterator - */ - LocatedFileStatusIterator createLocatedFileStatusIterator( - RemoteIterator statusIterator) { - return new LocatedFileStatusIterator(statusIterator); - } - - /** - * Interface to implement by the logic deciding whether to accept a summary entry or path as a - * valid file or directory. - */ - interface FileStatusAcceptor { - - /** - * Predicate to decide whether or not to accept a summary entry. - * - * @param keyPath qualified path to the entry - * @param summary summary entry - * @return true if the entry is accepted (i.e. that a status entry should be generated.) - */ - boolean accept(Path keyPath, ObsObject summary); - - /** - * Predicate to decide whether or not to accept a prefix. - * - * @param keyPath qualified path to the entry - * @param commonPrefix the prefix - * @return true if the entry is accepted (i.e. that a status entry should be generated.) - */ - boolean accept(Path keyPath, String commonPrefix); - } - - /** - * A remote iterator which only iterates over a single `LocatedFileStatus` value. - * - *

If the status value is null, the iterator declares that it has no data. This iterator is - * used to handle {@link OBSFileSystem#listStatus(Path)} calls where the path handed in refers to - * a file, not a directory: this is the iterator returned. - */ - static final class SingleStatusRemoteIterator implements RemoteIterator { - - /** The status to return; set to null after the first iteration. */ - private LocatedFileStatus status; - - /** - * Constructor. - * - * @param status status value: may be null, in which case the iterator is empty. - */ - public SingleStatusRemoteIterator(LocatedFileStatus status) { - this.status = status; - } - - /** - * {@inheritDoc} - * - * @return true if there is a file status to return: this is always false for the second - * iteration, and may be false for the first. - * @throws IOException never - */ - @Override - public boolean hasNext() throws IOException { - return status != null; - } - - /** - * {@inheritDoc} - * - * @return the non-null status element passed in when the instance was constructed, if it has not - * already been retrieved. - * @throws IOException never - * @throws NoSuchElementException if this is the second call, or it is the first call and a null - * {@link LocatedFileStatus} entry was passed to the constructor. - */ - @Override - public LocatedFileStatus next() throws IOException { - if (hasNext()) { - LocatedFileStatus s = this.status; - status = null; - return s; - } else { - throw new NoSuchElementException(); - } - } - } - - /** - * Accept all entries except the base path and those which map to OBS pseudo directory markers. - */ - static class AcceptFilesOnly implements FileStatusAcceptor { - private final Path qualifiedPath; - - public AcceptFilesOnly(Path qualifiedPath) { - this.qualifiedPath = qualifiedPath; - } - - /** - * Reject a summary entry if the key path is the qualified Path, or it ends with {@code - * "_$folder$"}.
- * - * @param keyPath key path of the entry - * @param summary summary entry - * @return true if the entry is accepted (i.e. that a status entry should be generated. - */ - @Override - public boolean accept(Path keyPath, ObsObject summary) { - return !keyPath.equals(qualifiedPath) - && !summary.getObjectKey().endsWith(OBS_FOLDER_SUFFIX) - && !objectRepresentsDirectory( - summary.getObjectKey(), summary.getMetadata().getContentLength()); - } - - /** - * Accept no directory paths. - * - * @param keyPath qualified path to the entry - * @param prefix common prefix in listing. - * @return false, always. - */ - @Override - public boolean accept(Path keyPath, String prefix) { - return false; - } - } - - /** - * Accept all entries except the base path and those which map to OBS pseudo directory markers. - */ - static class AcceptAllButSelfAndS3nDirs implements FileStatusAcceptor { - - /** Base path. */ - private final Path qualifiedPath; - - /** - * Constructor. - * - * @param qualifiedPath an already-qualified path. - */ - public AcceptAllButSelfAndS3nDirs(Path qualifiedPath) { - this.qualifiedPath = qualifiedPath; - } - - /** - * Reject a summary entry if the key path is the qualified Path, or it ends with {@code - * "_$folder$"}. - * - * @param keyPath key path of the entry - * @param summary summary entry - * @return true if the entry is accepted (i.e. that a status entry should be generated.) - */ - @Override - public boolean accept(Path keyPath, ObsObject summary) { - return !keyPath.equals(qualifiedPath) && !summary.getObjectKey().endsWith(OBS_FOLDER_SUFFIX); - } - - /** - * Accept all prefixes except the one for the base path, "self". - * - * @param keyPath qualified path to the entry - * @param prefix common prefix in listing. - * @return true if the entry is accepted (i.e. that a status entry should be generated. 
- */ - @Override - public boolean accept(Path keyPath, String prefix) { - return !keyPath.equals(qualifiedPath); - } - } - - /** - * Wraps up object listing into a remote iterator which will ask for more listing data if needed. - * - *

This is a complex operation, especially the process to determine if there are more entries - * remaining. If there are no more results remaining in the (filtered) results of the current - * listing request, then another request is made and those results filtered before the - * iterator can declare that there is more data available. - * - *

The need to filter the results precludes the iterator from simply declaring that if the - * {@link ObjectListingIterator#hasNext()} is true then there are more results. Instead the next - * batch of results must be retrieved and filtered. - * - *

What does this mean? It means that remote requests to retrieve new batches of object - * listings are made in the {@link #hasNext()} call; the {@link #next()} call simply returns the - * filtered results of the last listing processed. However, do note that {@link #next()} calls - * {@link #hasNext()} during its operation. This is critical to ensure that a listing obtained - * through a sequence of {@link #next()} will complete with the same set of results as a classic - * {@code while(it.hasNext())} loop. - * - *

Thread safety: None. - */ - class FileStatusListingIterator implements RemoteIterator { - - /** Source of objects. */ - private final ObjectListingIterator source; - /** Filter of paths from API call. */ - private final PathFilter filter; - /** Filter of entries from file status. */ - private final FileStatusAcceptor acceptor; - /** request batch size. */ - private int batchSize; - /** Iterator over the current set of results. */ - private ListIterator statusBatchIterator; - - /** - * Create an iterator over file status entries. - * - * @param source the listing iterator from a listObjects call. - * @param filter the filter on which paths to accept - * @param acceptor the class/predicate to decide which entries to accept in the listing based on - * the full file status. - * @throws IOException IO Problems - */ - FileStatusListingIterator( - ObjectListingIterator source, PathFilter filter, FileStatusAcceptor acceptor) - throws IOException { - this.source = source; - this.filter = filter; - this.acceptor = acceptor; - // build the first set of results. This will not trigger any - // remote IO, assuming the source iterator is in its initial - // iteration - requestNextBatch(); - } - - /** - * Report whether or not there is new data available. If there is data in the local filtered - * list, return true. Else: request more data until that condition is met, or there is no more - * remote listing data. - * - * @return true if a call to {@link #next()} will succeed. - * @throws IOException IO problems - */ - @Override - public boolean hasNext() throws IOException { - return statusBatchIterator.hasNext() || requestNextBatch(); - } - - @Override - public FileStatus next() throws IOException { - if (!hasNext()) { - throw new NoSuchElementException(); - } - return statusBatchIterator.next(); - } - - /** - * Try to retrieve another batch. Note that for the initial batch, {@link ObjectListingIterator} - * does not generate a request; it simply returns the initial set.
- * - * @return true if a new batch was created. - * @throws IOException IO problems - */ - private boolean requestNextBatch() throws IOException { - // look for more object listing batches being available - while (source.hasNext()) { - // if available, retrieve it and build the next status - if (buildNextStatusBatch(source.next())) { - // this batch successfully generated entries matching the filters/ - // acceptors; declare that the request was successful - return true; - } else { - LOG.debug("All entries in batch were filtered...continuing"); - } - } - // if this code is reached, it means that all remaining - // object lists have been retrieved, and there are no new entries - // to return. - return false; - } - - /** - * Build the next status batch from a listing. - * - * @param objects the next object listing - * @return true if this added any entries after filtering - */ - private boolean buildNextStatusBatch(ObjectListing objects) { - // counters for debug logs - int added = 0, ignored = 0; - // list to fill in with results. Initial size will be list maximum. 
- List stats = - new ArrayList<>(objects.getObjects().size() + objects.getCommonPrefixes().size()); - // objects - for (ObsObject summary : objects.getObjects()) { - String key = summary.getObjectKey(); - Path keyPath = owner.keyToQualifiedPath(key); - if (LOG.isDebugEnabled()) { - LOG.debug("{}: {}", keyPath, stringify(summary)); - } - // Skip over keys that are ourselves and old OBS _$folder$ files - if (acceptor.accept(keyPath, summary) && filter.accept(keyPath)) { - FileStatus status = - createFileStatus( - keyPath, summary, owner.getDefaultBlockSize(keyPath), owner.getUsername()); - LOG.debug("Adding: {}", status); - stats.add(status); - added++; - } else { - LOG.debug("Ignoring: {}", keyPath); - ignored++; - } - } - - // prefixes: always directories - for (String prefix : objects.getCommonPrefixes()) { - Path keyPath = owner.keyToQualifiedPath(prefix); - if (acceptor.accept(keyPath, prefix) && filter.accept(keyPath)) { - FileStatus status = new OBSFileStatus(false, keyPath, owner.getUsername()); - LOG.debug("Adding directory: {}", status); - added++; - stats.add(status); - } else { - LOG.debug("Ignoring directory: {}", keyPath); - ignored++; - } - } - - // finish up - batchSize = stats.size(); - statusBatchIterator = stats.listIterator(); - boolean hasNext = statusBatchIterator.hasNext(); - LOG.debug( - "Added {} entries; ignored {}; hasNext={}; hasMoreObjects={}", - added, - ignored, - hasNext, - objects.isTruncated()); - return hasNext; - } - - /** - * Get the number of entries in the current batch. - * - * @return a number, possibly zero. - */ - public int getBatchSize() { - return batchSize; - } - } - - /** - * Wraps up OBS `ListObjects` requests in a remote iterator which will ask for more listing data - * if needed. - * - *

That is: - * - *

1. The first invocation of the {@link #next()} call will return the results of the first - * request, the one created during the construction of the instance. - * - *

2. Second and later invocations will continue the ongoing listing, calling {@link
- * OBSFileSystem#continueListObjects(ObjectListing)} to request the next batch of results.
- *
- *

3. The {@link #hasNext()} predicate returns true for the initial call, where {@link #next()} - * will return the initial results. It declares that it has future results iff the last executed - * request was truncated. - * - *
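The three-point contract above reduces to a small state machine: the initial page is fetched at construction time, the first `next()` returns it with no remote IO, and afterwards `hasNext()` is true only while the last page was truncated. A sketch of that state machine follows; `PagedListingIterator`, `Page`, and `continueListing` are hypothetical names standing in for the real `ObjectListingIterator`, `ObjectListing`, and `OBSFileSystem#continueListObjects`.

```java
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.Function;

/** One page of results plus a flag saying whether more pages exist remotely. */
class Page {
    final List<String> keys;
    final boolean truncated;

    Page(List<String> keys, boolean truncated) {
        this.keys = keys;
        this.truncated = truncated;
    }
}

/** Sketch of the firstListing / isTruncated iteration contract. */
class PagedListingIterator {
    /** Stands in for the remote "continue listing" call. */
    private final Function<Page, Page> continueListing;
    private Page current;
    private boolean firstListing = true;

    PagedListingIterator(Page initial, Function<Page, Page> continueListing) {
        this.current = initial;  // fetched eagerly by the constructor
        this.continueListing = continueListing;
    }

    boolean hasNext() {
        // True for the initial call; afterwards only while the last page was truncated.
        return firstListing || current.truncated;
    }

    Page next() {
        if (firstListing) {
            firstListing = false;  // hand back the pre-fetched page, no remote IO
        } else if (!current.truncated) {
            throw new NoSuchElementException("No more results in listing");
        } else {
            current = continueListing.apply(current);  // remote call for the next page
        }
        return current;
    }
}
```

Note that `hasNext()` here performs no IO at all, which is why the real iterator documents that it never throws: all remote work happens in `next()`, the inverse of the `FileStatusListingIterator` wrapper above it.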

Thread safety: none. - */ - class ObjectListingIterator implements RemoteIterator { - - /** The path listed. */ - private final Path listPath; - - /** The most recent listing results. */ - private ObjectListing objects; - - /** Indicator that this is the first listing. */ - private boolean firstListing = true; - - /** Count of how many listings have been requested (including initial result). */ - private int listingCount = 1; - - /** Maximum keys in a request. */ - private int maxKeys; - - /** delimiter for listing objects. */ - private String delimiter; - - /** - * Constructor -calls `listObjects()` on the request to populate the initial set of results/fail - * if there was a problem talking to the bucket. - * - * @param listPath path of the listing - * @param request initial request to make - */ - ObjectListingIterator(Path listPath, ListObjectsRequest request) { - this.listPath = listPath; - this.maxKeys = owner.getMaxKeys(); - this.delimiter = request.getDelimiter(); - this.objects = owner.listObjects(request); - } - - /** - * Declare that the iterator has data if it is either is the initial iteration or it is a later - * one and the last listing obtained was incomplete. - * - * @throws IOException never: there is no IO in this operation. - */ - @Override - public boolean hasNext() throws IOException { - return firstListing || objects.isTruncated(); - } - - /** - * Ask for the next listing. For the first invocation, this returns the initial set, with no - * remote IO. For later requests, OBS will be queried, hence the calls may block or fail. - * - * @return the next object listing. - * @throws IOException if a query made of OBS fails. - * @throws NoSuchElementException if there is no more data to list. - */ - @Override - public ObjectListing next() throws IOException { - if (firstListing) { - // on the first listing, don't request more data. - // Instead just clear the firstListing flag so that it future calls - // will request new data. 
- firstListing = false; - } else { - try { - if (!objects.isTruncated()) { - // nothing more to request: fail. - throw new NoSuchElementException("No more results in listing of " + listPath); - } - // need to request a new set of objects. - LOG.debug("[{}], Requesting next {} objects under {}", listingCount, maxKeys, listPath); - objects = owner.continueListObjects(objects); - listingCount++; - LOG.debug("New listing status: {}", this); - } catch (ObsException e) { - throw translateException("listObjects()", listPath, e); - } - } - return objects; - } - - @Override - public String toString() { - return "Object listing iterator against " - + listPath - + "; listing count " - + listingCount - + "; isTruncated=" - + objects.isTruncated(); - } - - /** - * Get the path listed. - * - * @return the path used in this listing. - */ - public Path getListPath() { - return listPath; - } - - /** - * Get the count of listing requests. - * - * @return the counter of requests made (including the initial lookup). - */ - public int getListingCount() { - return listingCount; - } - } - - /** - * Take a remote iterator over a set of {@link FileStatus} instances and return a remote iterator - * of {@link LocatedFileStatus} instances. - */ - class LocatedFileStatusIterator implements RemoteIterator { - private final RemoteIterator statusIterator; - - /** - * Constructor. 
- * - * @param statusIterator an iterator over the remote status entries - */ - LocatedFileStatusIterator(RemoteIterator statusIterator) { - this.statusIterator = statusIterator; - } - - @Override - public boolean hasNext() throws IOException { - return statusIterator.hasNext(); - } - - @Override - public LocatedFileStatus next() throws IOException { - return owner.toLocatedFileStatus(statusIterator.next()); - } - } -} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/MultiReadTask.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/MultiReadTask.java deleted file mode 100644 index 41c2b0d..0000000 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/MultiReadTask.java +++ /dev/null @@ -1,120 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.apache.hadoop.fs.obs; - -import com.google.common.io.ByteStreams; -import com.obs.services.ObsClient; -import com.obs.services.internal.io.UnrecoverableIOException; -import com.obs.services.model.GetObjectRequest; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import java.io.IOException; -import java.io.InputStream; -import java.io.InterruptedIOException; -import java.util.concurrent.Callable; - -public class MultiReadTask implements Callable { - - private static final Logger LOG = LoggerFactory.getLogger(MultiReadTask.class); - private final int RETRY_TIME = 3; - private String bucket; - private String key; - private ObsClient client; - private ReadBuffer readBuffer; - private OBSFileSystem fs; - - public MultiReadTask(OBSFileSystem fs, String bucket, String key, ObsClient client, ReadBuffer readBuffer) { - this.fs = fs; - this.bucket = bucket; - this.key = key; - this.client = client; - this.readBuffer = readBuffer; - } - - @Override - public Void call() throws Exception { - GetObjectRequest request = new GetObjectRequest(bucket, key); - request.setRangeStart(readBuffer.getStart()); - request.setRangeEnd(readBuffer.getEnd()); - if (fs.getSse().isSseCEnable()) { - request.setSseCHeader(fs.getSse().getSseCHeader()); - } - InputStream stream = null; - readBuffer.setState(ReadBuffer.STATE.ERROR); - - boolean interrupted = false; - - for (int i = 0; i < RETRY_TIME; i++) { - try { - if (Thread.interrupted()) { - throw new InterruptedException("Interrupted read task"); - } - stream = client.getObject(request).getObjectContent(); - ByteStreams.readFully( - stream, - readBuffer.getBuffer(), - 0, - (int) (readBuffer.getEnd() - readBuffer.getStart() + 1)); - readBuffer.setState(ReadBuffer.STATE.FINISH); - - return null; - } catch (IOException e) { - if (e instanceof InterruptedIOException) { - // LOG.info("Buffer closed, task abort"); - interrupted = true; - throw e; - } - LOG.warn("IOException occurred in Read task", e); - 
readBuffer.setState(ReadBuffer.STATE.ERROR); - if (i == RETRY_TIME - 1) { - throw e; - } - } catch (Exception e) { - readBuffer.setState(ReadBuffer.STATE.ERROR); - if (e instanceof UnrecoverableIOException) { - // LOG.info("Buffer closed, task abort"); - interrupted = true; - throw e; - } else { - LOG.warn("Exception occurred in Read task", e); - if (i == RETRY_TIME - 1) { - throw e; - } - } - } finally { - // Return to connection pool - if (stream != null) { - stream.close(); - } - - // SLEEP - if (!interrupted && readBuffer.getState() != ReadBuffer.STATE.FINISH) { - try { - Thread.sleep(3000); - } catch (InterruptedException e) { - // TODO - interrupted = true; - throw e; - } - } - } - } - return null; - } -} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBS.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBS.java index af04ee1..98d172f 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBS.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBS.java @@ -41,8 +41,7 @@ public final class OBS extends DelegateToFileSystem { * @throws IOException on any failure to initialize this instance * @throws URISyntaxException theUri has syntax error */ - public OBS(final URI theUri, final Configuration conf) - throws IOException, URISyntaxException { + public OBS(final URI theUri, final Configuration conf) throws IOException, URISyntaxException { super(theUri, new OBSFileSystem(), conf, "obs", false); } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSBlockOutputStream.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSBlockOutputStream.java index f82acc7..27aabfa 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSBlockOutputStream.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSBlockOutputStream.java @@ -67,8 +67,7 @@ class OBSBlockOutputStream extends OutputStream implements Syncable { /** * Class logger. 
*/ - private static final Logger LOG = LoggerFactory.getLogger( - OBSBlockOutputStream.class); + private static final Logger LOG = LoggerFactory.getLogger(OBSBlockOutputStream.class); /** * Owner FileSystem. @@ -162,13 +161,8 @@ class OBSBlockOutputStream extends OutputStream implements Syncable { * @param isAppendable if append is supported * @throws IOException on any problem */ - OBSBlockOutputStream( - final OBSFileSystem owner, - final String obsObjectKey, - final long objLen, - final ExecutorService execService, - final boolean isAppendable) - throws IOException { + OBSBlockOutputStream(final OBSFileSystem owner, final String obsObjectKey, final long objLen, + final ExecutorService execService, final boolean isAppendable) throws IOException { this.appendAble = new AtomicBoolean(isAppendable); this.fs = owner; this.key = obsObjectKey; @@ -177,19 +171,14 @@ class OBSBlockOutputStream extends OutputStream implements Syncable { this.blockFactory = owner.getBlockFactory(); this.blockSize = (int) owner.getPartSize(); this.writeOperationHelper = owner.getWriteHelper(); - Preconditions.checkArgument( - owner.getPartSize() >= OBSConstants.MULTIPART_MIN_SIZE, + Preconditions.checkArgument(owner.getPartSize() >= OBSConstants.MULTIPART_MIN_SIZE, "Block size is too small: %d", owner.getPartSize()); - this.executorService = MoreExecutors.listeningDecorator( - execService); + this.executorService = MoreExecutors.listeningDecorator(execService); this.multiPartUpload = null; // create that first block. This guarantees that an open + close // sequence writes a 0-byte entry. createBlockIfNeeded(); - LOG.debug( - "Initialized OBSBlockOutputStream for {}" + " output to {}", - owner.getWriteHelper(), - activeBlock); + LOG.debug("Initialized OBSBlockOutputStream for {}" + " output to {}", owner.getWriteHelper(), activeBlock); } /** @@ -198,15 +187,12 @@ class OBSBlockOutputStream extends OutputStream implements Syncable { * @return the active block; null if there isn't one. 
* @throws IOException on any failure to create */ - private synchronized OBSDataBlocks.DataBlock createBlockIfNeeded() - throws IOException { + private synchronized OBSDataBlocks.DataBlock createBlockIfNeeded() throws IOException { if (activeBlock == null) { blockCount++; if (blockCount >= OBSConstants.MAX_MULTIPART_COUNT) { - LOG.debug( - "Number of partitions in stream exceeds limit for OBS: " - + OBSConstants.MAX_MULTIPART_COUNT - + " write may fail."); + LOG.debug("Number of partitions in stream exceeds limit for OBS: " + OBSConstants.MAX_MULTIPART_COUNT + + " write may fail."); } activeBlock = blockFactory.create(blockCount, this.blockSize); } @@ -258,8 +244,7 @@ private synchronized void clearActiveBlock() { */ private void checkStreamOpen() throws IOException { if (closed) { - throw new IOException( - uri + ": " + FSExceptionMessages.STREAM_IS_CLOSED); + throw new IOException(uri + ": " + FSExceptionMessages.STREAM_IS_CLOSED); } } @@ -307,16 +292,13 @@ public synchronized void write(final int b) throws IOException { * @throws IOException on any problem */ @Override - public synchronized void write(@NotNull final byte[] source, - final int offset, final int len) - throws IOException { + public synchronized void write(@NotNull final byte[] source, final int offset, final int len) throws IOException { fs.checkOpen(); checkStreamOpen(); long startTime = System.currentTimeMillis(); long endTime; if (hasException.get()) { - String closeWarning = String.format( - "write has error. bs : pre upload obs[%s] has error.", key); + String closeWarning = String.format("write has error. 
bs : pre upload obs[%s] has error.", key); LOG.warn(closeWarning); throw new IOException(closeWarning); } @@ -332,45 +314,35 @@ public synchronized void write(@NotNull final byte[] source, innerWrite(source, offset, len, written, remainingCapacity); endTime = System.currentTimeMillis(); if (fs.getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - null, BasicMetricsConsumer.MetricRecord.WRITE, true, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.WRITE, true, endTime - startTime); OBSCommonUtils.setMetricsInfo(fs, record); } } catch (IOException e) { endTime = System.currentTimeMillis(); if (fs.getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - null, BasicMetricsConsumer.MetricRecord.WRITE, false, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.WRITE, false, endTime - startTime); OBSCommonUtils.setMetricsInfo(fs, record); } - LOG.error( - "Write data for key {} of bucket {} error, error message {}", - key, fs.getBucket(), + LOG.error("Write data for key {} of bucket {} error, error message {}", key, fs.getBucket(), e.getMessage()); throw e; } } - private synchronized void innerWrite(final byte[] source, final int offset, - final int len, - final int written, final int remainingCapacity) - throws IOException { + private synchronized void innerWrite(final byte[] source, final int offset, final int len, final int written, + final int remainingCapacity) throws IOException { if (written < len) { // not everything was written —the block has run out // of capacity // Trigger an upload then process the remainder. 
- LOG.debug( - "writing more data than block has capacity -triggering upload"); + LOG.debug("writing more data than block has capacity -triggering upload"); if (appendAble.get()) { // to write a buffer then append to obs LOG.debug("[Append] open stream and single write size {} " - + "greater than buffer size {}, append buffer to obs.", - len, blockSize); + + "greater than buffer size {}, append buffer to obs.", len, blockSize); flushCurrentBlock(); } else { // block output stream logic, multi-part upload @@ -385,8 +357,7 @@ private synchronized void innerWrite(final byte[] source, final int offset, if (appendAble.get()) { // to write a buffer then append to obs LOG.debug("[Append] open stream and already write size " - + "equal to buffer size {}, append buffer to obs.", - blockSize); + + "equal to buffer size {}, append buffer to obs.", blockSize); flushCurrentBlock(); } else { // block output stream logic, multi-part upload @@ -414,8 +385,7 @@ private synchronized void uploadCurrentBlock() throws IOException { multiPartUpload.uploadBlockAsync(getActiveBlock()); } catch (IOException e) { hasException.set(true); - LOG.error("Upload current block on ({}/{}) failed.", fs.getBucket(), - key, e); + LOG.error("Upload current block on ({}/{}) failed.", fs.getBucket(), key, e); throw e; } finally { // set the block to null, so the next write will create a new block. @@ -443,16 +413,13 @@ public synchronized void close() throws IOException { } if (hasException.get()) { - String closeWarning = String.format( - "closed has error. bs : pre write obs[%s] has error.", key); + String closeWarning = String.format("closed has error. 
bs : pre write obs[%s] has error.", key); LOG.warn(closeWarning); endTime = System.currentTimeMillis(); if (fs.getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.OUTPUT, - BasicMetricsConsumer.MetricRecord.CLOSE, false, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.OUTPUT, BasicMetricsConsumer.MetricRecord.CLOSE, false, + endTime - startTime); OBSCommonUtils.setMetricsInfo(fs, record); } throw new IOException(closeWarning); @@ -472,11 +439,9 @@ public synchronized void close() throws IOException { fs.removeFileBeingWritten(key); endTime = System.currentTimeMillis(); if (fs.getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.OUTPUT, - BasicMetricsConsumer.MetricRecord.CLOSE, true, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.OUTPUT, BasicMetricsConsumer.MetricRecord.CLOSE, true, + endTime - startTime); OBSCommonUtils.setMetricsInfo(fs, record); } closed = true; @@ -488,8 +453,7 @@ public synchronized void close() throws IOException { * @throws IOException any problem in append or put object */ private synchronized void putObjectIfNeedAppend() throws IOException { - if (appendAble.get() && fs.exists( - OBSCommonUtils.keyToQualifiedPath(fs, key))) { + if (appendAble.get() && fs.exists(OBSCommonUtils.keyToQualifiedPath(fs, key))) { appendFsFile(); } else { putObject(); @@ -506,11 +470,9 @@ private synchronized void appendFsFile() throws IOException { final OBSDataBlocks.DataBlock block = getActiveBlock(); WriteFileRequest writeFileReq; if (block instanceof OBSDataBlocks.DiskBlock) { - writeFileReq = OBSCommonUtils.newAppendFileRequest(fs, key, - objectLen, (File) block.startUpload()); + 
writeFileReq = OBSCommonUtils.newAppendFileRequest(fs, key, objectLen, (File) block.startUpload()); } else { - writeFileReq = OBSCommonUtils.newAppendFileRequest(fs, key, - objectLen, (InputStream) block.startUpload()); + writeFileReq = OBSCommonUtils.newAppendFileRequest(fs, key, objectLen, (InputStream) block.startUpload()); } OBSCommonUtils.appendFile(fs, writeFileReq); objectLen += block.dataSize(); @@ -524,21 +486,17 @@ private synchronized void appendFsFile() throws IOException { * @throws IOException any problem. */ private synchronized void putObject() throws IOException { - LOG.debug("Executing regular upload for {}", - writeOperationHelper.toString(key)); + LOG.debug("Executing regular upload for {}", writeOperationHelper.toString(key)); final OBSDataBlocks.DataBlock block = getActiveBlock(); clearActiveBlock(); final int size = block.dataSize(); final PutObjectRequest putObjectRequest; if (block instanceof OBSDataBlocks.DiskBlock) { - putObjectRequest = writeOperationHelper.newPutRequest(key, - (File) block.startUpload()); + putObjectRequest = writeOperationHelper.newPutRequest(key, (File) block.startUpload()); } else { - putObjectRequest = - writeOperationHelper.newPutRequest(key, - (InputStream) block.startUpload(), size); + putObjectRequest = writeOperationHelper.newPutRequest(key, (InputStream) block.startUpload(), size); } putObjectRequest.setAcl(fs.getCannedACL()); @@ -581,10 +539,8 @@ public synchronized void hflush() throws IOException { long endTime = System.currentTimeMillis(); if (fs.getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - null, BasicMetricsConsumer.MetricRecord.HFLUSH, true, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.HFLUSH, true, endTime - startTime); OBSCommonUtils.setMetricsInfo(fs, record); } } @@ -599,9 +555,7 @@ private synchronized void flushOrSync() throws 
IOException { checkStreamOpen(); if (hasException.get()) { - String flushWarning = String.format( - "flushOrSync has error. bs : pre write obs[%s] has error.", - key); + String flushWarning = String.format("flushOrSync has error. bs : pre write obs[%s] has error.", key); LOG.warn(flushWarning); throw new IOException(flushWarning); } @@ -632,9 +586,7 @@ private synchronized void clearHFlushOrSync() { * @param hasBlock jungle if has block * @throws IOException io exception */ - private synchronized void uploadWriteBlocks( - final OBSDataBlocks.DataBlock block, - final boolean hasBlock) + private synchronized void uploadWriteBlocks(final OBSDataBlocks.DataBlock block, final boolean hasBlock) throws IOException { if (multiPartUpload == null) { if (hasBlock) { @@ -651,8 +603,7 @@ private synchronized void uploadWriteBlocks( uploadCurrentBlock(); } // wait for the partial uploads to finish - final List> partETags - = multiPartUpload.waitForAllPartUploads(); + final List> partETags = multiPartUpload.waitForAllPartUploads(); List listPartETags = new ArrayList<>(); int countSize = 0; for (Pair pair : partETags) { @@ -670,17 +621,14 @@ private synchronized void uploadWriteBlocks( private synchronized void completeCurrentBlock() throws IOException { OBSDataBlocks.DataBlock block = getActiveBlock(); boolean hasBlock = hasActiveBlock(); - LOG.debug("{}: complete block #{}: current block= {}", this, blockCount, - hasBlock ? block : "(none)"); + LOG.debug("{}: complete block #{}: current block= {}", this, blockCount, hasBlock ? block : "(none)"); try { uploadWriteBlocks(block, hasBlock); } catch (IOException ioe) { - LOG.error("Upload data to obs error. io exception : {}", - ioe.getMessage()); + LOG.error("Upload data to obs error. io exception : {}", ioe.getMessage()); throw ioe; } catch (Exception e) { - LOG.error("Upload data to obs error. other exception : {}", - e.getMessage()); + LOG.error("Upload data to obs error. 
other exception : {}", e.getMessage()); throw e; } finally { OBSCommonUtils.closeAll(block); @@ -691,19 +639,15 @@ private synchronized void completeCurrentBlock() throws IOException { private synchronized void flushCurrentBlock() throws IOException { OBSDataBlocks.DataBlock block = getActiveBlock(); boolean hasBlock = hasActiveBlock(); - LOG.debug( - "{}: complete block #{}: current block= {}", this, blockCount, - hasBlock ? block : "(none)"); + LOG.debug("{}: complete block #{}: current block= {}", this, blockCount, hasBlock ? block : "(none)"); try { uploadWriteBlocks(block, hasBlock); } catch (IOException ioe) { - LOG.error("hflush data to obs error. io exception : {}", - ioe.getMessage()); + LOG.error("hflush data to obs error. io exception : {}", ioe.getMessage()); hasException.set(true); throw ioe; } catch (Exception e) { - LOG.error("hflush data to obs error. other exception : {}", - e.getMessage()); + LOG.error("hflush data to obs error. other exception : {}", e.getMessage()); hasException.set(true); throw e; } finally { @@ -720,10 +664,8 @@ public synchronized void hsync() throws IOException { flushOrSync(); long endTime = System.currentTimeMillis(); if (fs.getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - null, BasicMetricsConsumer.MetricRecord.HFLUSH, true, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.HFLUSH, true, endTime - startTime); OBSCommonUtils.setMetricsInfo(fs, record); } } @@ -740,18 +682,13 @@ private class MultiPartUpload { /** * List for async part upload future. 
*/ - private final List>> - partETagsFutures; + private final List>> partETagsFutures; MultiPartUpload() throws IOException { this.uploadId = writeOperationHelper.initiateMultiPartUpload(key); this.partETagsFutures = new ArrayList<>(2); - LOG.debug( - "Initiated multi-part upload for {} with , the key is {}" - + "id '{}'", - writeOperationHelper, - uploadId, - key); + LOG.debug("Initiated multi-part upload for {} with , the key is {}" + "id '{}'", writeOperationHelper, + uploadId, key); } /** @@ -760,64 +697,46 @@ private class MultiPartUpload { * @param block block to upload * @throws IOException upload failure */ - private void uploadBlockAsync(final OBSDataBlocks.DataBlock block) - throws IOException { + private void uploadBlockAsync(final OBSDataBlocks.DataBlock block) throws IOException { LOG.debug("Queueing upload of {}", block); final int size = block.dataSize(); final int currentPartNumber = partETagsFutures.size() + 1; final UploadPartRequest request; if (block instanceof OBSDataBlocks.DiskBlock) { - request = writeOperationHelper.newUploadPartRequest( - key, - uploadId, - currentPartNumber, - size, + request = writeOperationHelper.newUploadPartRequest(key, uploadId, currentPartNumber, size, (File) block.startUpload()); } else { - request = writeOperationHelper.newUploadPartRequest( - key, - uploadId, - currentPartNumber, - size, + request = writeOperationHelper.newUploadPartRequest(key, uploadId, currentPartNumber, size, (InputStream) block.startUpload()); } - ListenableFuture> partETagFuture - = executorService.submit(() -> { - // this is the queued upload operation - LOG.debug("Uploading part {} for id '{}'", - currentPartNumber, uploadId); - // do the upload - PartEtag partETag = null; - try { - if (mockUploadPartError) { - throw new ObsException("mock upload part error"); - } - UploadPartResult uploadPartResult - = OBSCommonUtils.uploadPart(fs, request); - partETag = - new PartEtag(uploadPartResult.getEtag(), - uploadPartResult.getPartNumber()); - if 
(LOG.isDebugEnabled()) { - LOG.debug("Completed upload of {} to part {}", - block, partETag); - } - } catch (ObsException e) { - // catch all exception - hasException.set(true); - IOException ioException = - OBSCommonUtils.translateException("UploadPart", key, - e); - LOG.error("UploadPart failed (ObsException). {}", - ioException.getMessage()); - throw ioException; - } finally { - // close the stream and block - OBSCommonUtils.closeAll(block); + ListenableFuture> partETagFuture = executorService.submit(() -> { + // this is the queued upload operation + LOG.debug("Uploading part {} for id '{}'", currentPartNumber, uploadId); + // do the upload + PartEtag partETag = null; + try { + if (mockUploadPartError) { + throw new ObsException("mock upload part error"); + } + UploadPartResult uploadPartResult = OBSCommonUtils.uploadPart(fs, request); + partETag = new PartEtag(uploadPartResult.getEtag(), uploadPartResult.getPartNumber()); + if (LOG.isDebugEnabled()) { + LOG.debug("Completed upload of {} to part {}", block, partETag); } - return new Pair(partETag, size); - }); + } catch (ObsException e) { + // catch all exception + hasException.set(true); + IOException ioException = OBSCommonUtils.translateException("UploadPart", key, e); + LOG.error("UploadPart failed (ObsException). 
{}", ioException.getMessage()); + throw ioException; + } finally { + // close the stream and block + OBSCommonUtils.closeAll(block); + } + return new Pair(partETag, size); + }); partETagsFutures.add(partETagFuture); } @@ -827,10 +746,8 @@ private void uploadBlockAsync(final OBSDataBlocks.DataBlock block) * @return list of results * @throws IOException IO Problems */ - private List> waitForAllPartUploads() - throws IOException { - LOG.debug("Waiting for {} uploads to complete", - partETagsFutures.size()); + private List> waitForAllPartUploads() throws IOException { + LOG.debug("Waiting for {} uploads to complete", partETagsFutures.size()); try { return Futures.allAsList(partETagsFutures).get(); } catch (InterruptedException ie) { @@ -841,9 +758,7 @@ private List> waitForAllPartUploads() } // abort multipartupload this.abort(); - throw new IOException( - "Interrupted multi-part upload with id '" + uploadId - + "' to " + key); + throw new IOException("Interrupted multi-part upload with id '" + uploadId + "' to " + key); } catch (ExecutionException ee) { // there is no way of recovering so abort // cancel all partUploads @@ -854,9 +769,8 @@ private List> waitForAllPartUploads() } // abort multipartupload this.abort(); - throw OBSCommonUtils.extractException( - "Multi-part upload with id '" + uploadId + "' to " + key, - key, ee); + throw OBSCommonUtils.extractException("Multi-part upload with id '" + uploadId + "' to " + key, key, + ee); } } @@ -868,16 +782,13 @@ private List> waitForAllPartUploads() * @return result for completing multipart upload * @throws IOException on any problem */ - private CompleteMultipartUploadResult complete( - final List partETags) throws IOException { + private CompleteMultipartUploadResult complete(final List partETags) throws IOException { String operation = String.format( - "Completing multi-part upload for key '%s'," - + " id '%s' with %s partitions ", - key, uploadId, partETags.size()); + "Completing multi-part upload for key '%s'," + 
" id '%s' with %s partitions ", key, uploadId, + partETags.size()); try { LOG.debug(operation); - return writeOperationHelper.completeMultipartUpload(key, - uploadId, partETags); + return writeOperationHelper.completeMultipartUpload(key, uploadId, partETags); } catch (ObsException e) { throw OBSCommonUtils.translateException(operation, key, e); } @@ -889,18 +800,13 @@ private CompleteMultipartUploadResult complete( * process. */ void abort() { - String operation = - String.format( - "Aborting multi-part upload for '%s', id '%s", - writeOperationHelper, uploadId); + String operation = String.format("Aborting multi-part upload for '%s', id '%s", writeOperationHelper, + uploadId); try { LOG.debug(operation); writeOperationHelper.abortMultipartUpload(key, uploadId); } catch (ObsException e) { - LOG.warn( - "Unable to abort multipart upload, you may need to purge " - + "uploaded parts", - e); + LOG.warn("Unable to abort multipart upload, you may need to purge " + "uploaded parts", e); } } } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSCommonUtils.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSCommonUtils.java index cb06a03..ca29671 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSCommonUtils.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSCommonUtils.java @@ -62,14 +62,13 @@ * Common utils for {@link OBSFileSystem}. */ //CHECKSTYLE:OFF -final class OBSCommonUtils { +public final class OBSCommonUtils { //CHECKSTYLE:ON /** * Class logger. */ - private static final Logger LOG = LoggerFactory.getLogger( - OBSCommonUtils.class); + private static final Logger LOG = LoggerFactory.getLogger(OBSCommonUtils.class); /** * Moved permanently response code. @@ -110,13 +109,12 @@ final class OBSCommonUtils { * Core property for provider path. Duplicated here for consistent code * across Hadoop version: {@value}. 
*/ - static final String CREDENTIAL_PROVIDER_PATH - = "hadoop.security.credential.provider.path"; + static final String CREDENTIAL_PROVIDER_PATH = "hadoop.security.credential.provider.path"; /** * Max time in milliseconds to retry when request failed. */ - static final int MAX_TIME_IN_MILLISECONDS_TO_RETRY = 180000; + public static long MAX_TIME_IN_MILLISECONDS_TO_RETRY = 180000; /** * Min time in milliseconds to sleep between retry intervals. @@ -146,6 +144,18 @@ final class OBSCommonUtils { private OBSCommonUtils() { } + /** + * Set the max time in millisecond to retry on error. + * @param maxTime max time in millisecond to set for retry + */ + static void setMaxTimeInMillisecondsToRetry(long maxTime) { + if (maxTime <= 0) { + LOG.warn("Invalid time[{}] to set for retry on error.", maxTime); + maxTime = OBSConstants.DEFAULT_TIME_IN_MILLISECOND_TO_RETRY; + } + MAX_TIME_IN_MILLISECONDS_TO_RETRY = maxTime; + } + /** * Get the fs status of the bucket. * @@ -155,8 +165,7 @@ private OBSCommonUtils() { * @throws FileNotFoundException the bucket is absent * @throws IOException any other problem talking to OBS */ - static boolean getBucketFsStatus(final ObsClient obs, - final String bucketName) + static boolean getBucketFsStatus(final ObsClient obs, final String bucketName) throws FileNotFoundException, IOException { GetBucketFSStatusRequest request = new GetBucketFSStatusRequest(); request.setBucketName(bucketName); @@ -171,25 +180,22 @@ static boolean getBucketFsStatus(final ObsClient obs, * @param request information to get bucket FsStatus * @return boolean value indicating if this bucket is a posix bucket */ - private static FSStatusEnum innerGetBucketFsStatus(final ObsClient obs, - final GetBucketFSStatusRequest request) throws IOException { + private static FSStatusEnum innerGetBucketFsStatus(final ObsClient obs, final GetBucketFSStatusRequest request) + throws IOException { long delayMs; int retryTime = 0; long startTime = System.currentTimeMillis(); 
GetBucketFSStatusResult getBucketFsStatusResult; - while (System.currentTimeMillis() - startTime - <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { try { getBucketFsStatusResult = obs.getBucketFSStatus(request); return getBucketFsStatusResult.getStatus(); } catch (ObsException e) { - LOG.debug( - "Failed to getBucketFsStatus for [{}], retry time [{}], " - + "exception [{}]", request.getBucketName(), - retryTime, e); + LOG.debug("Failed to getBucketFsStatus for [{}], retry time [{}], " + "exception [{}]", + request.getBucketName(), retryTime, e); - IOException ioException = OBSCommonUtils.translateException( - "getBucketFSStatus", request.getBucketName(), e); + IOException ioException = OBSCommonUtils.translateException("getBucketFSStatus", + request.getBucketName(), e); if (!(ioException instanceof OBSIOException)) { throw ioException; } @@ -207,12 +213,9 @@ private static FSStatusEnum innerGetBucketFsStatus(final ObsClient obs, try { getBucketFsStatusResult = obs.getBucketFSStatus(request); } catch (ObsException e) { - LOG.debug( - "Failed to getBucketFsStatus for [{}], retry time [{}], " - + "exception [{}]", request.getBucketName(), - retryTime, e); - throw OBSCommonUtils.translateException( - "getBucketFSStatus", request.getBucketName(), e); + LOG.debug("Failed to getBucketFsStatus for [{}], retry time [{}], " + "exception [{}]", + request.getBucketName(), retryTime, e); + throw OBSCommonUtils.translateException("getBucketFSStatus", request.getBucketName(), e); } return getBucketFsStatusResult.getStatus(); } @@ -231,29 +234,22 @@ static boolean isValidName(final String src) { } // Check for ".." "." 
":" "/" - String[] components = org.apache.hadoop.util.StringUtils.split(src, - '/'); + String[] components = org.apache.hadoop.util.StringUtils.split(src, '/'); for (int i = 0; i < components.length; i++) { String element = components[i]; - if (element.equals(".") - || element.contains(":") - || element.contains("/")) { + if (element.equals(".") || element.contains(":") || element.contains("/")) { return false; } // ".." is allowed in path starting with /.reserved/.inodes if (element.equals("..")) { - if (components.length > 4 - && components[1].equals(".reserved") - && components[2].equals(".inodes")) { + if (components.length > 4 && components[1].equals(".reserved") && components[2].equals(".inodes")) { continue; } return false; } // The string may start or end with a /, but not have // "//" in the middle. - if (element.isEmpty() - && i != components.length - 1 - && i != 0) { + if (element.isEmpty() && i != components.length - 1 && i != 0) { return false; } } @@ -264,8 +260,7 @@ static boolean isValidName(final String src) { * @param flags * @throws UnsupportedOperationException */ - static void checkCreateFlag(final EnumSet flags) - throws UnsupportedOperationException { + static void checkCreateFlag(final EnumSet flags) throws UnsupportedOperationException { if (flags == null) { return; } @@ -273,9 +268,7 @@ static void checkCreateFlag(final EnumSet flags) StringBuilder unsupportedFlags = new StringBuilder(); boolean hasUnSupportedFlag = false; for (CreateFlag flag : flags) { - if (flag != CreateFlag.CREATE - && flag != CreateFlag.APPEND - && flag != CreateFlag.OVERWRITE + if (flag != CreateFlag.CREATE && flag != CreateFlag.APPEND && flag != CreateFlag.OVERWRITE && flag != CreateFlag.SYNC_BLOCK) { unsupportedFlags.append(flag).append(","); hasUnSupportedFlag = true; @@ -284,9 +277,7 @@ static void checkCreateFlag(final EnumSet flags) if (hasUnSupportedFlag) { throw new UnsupportedOperationException( - "create with unsupported flags: " - + 
unsupportedFlags.substring(0, - unsupportedFlags.lastIndexOf(","))); + "create with unsupported flags: " + unsupportedFlags.substring(0, unsupportedFlags.lastIndexOf(","))); } } @@ -303,9 +294,7 @@ static String pathToKey(final OBSFileSystem owner, final Path path) { absolutePath = new Path(owner.getWorkingDirectory(), path); } - if (absolutePath.toUri().getScheme() != null && absolutePath.toUri() - .getPath() - .isEmpty()) { + if (absolutePath.toUri().getScheme() != null && absolutePath.toUri().getPath().isEmpty()) { return ""; } @@ -345,8 +334,7 @@ static Path keyToPath(final String key) { * @param key input key * @return the fully qualified path including URI scheme and bucket name. */ - static Path keyToQualifiedPath(final OBSFileSystem owner, - final String key) { + static Path keyToQualifiedPath(final OBSFileSystem owner, final String key) { return qualify(owner, keyToPath(key)); } @@ -368,8 +356,7 @@ static Path qualify(final OBSFileSystem owner, final Path path) { * @return new key */ static String maybeDeleteBeginningSlash(final String key) { - return !StringUtils.isEmpty(key) && key.startsWith("/") ? key.substring( - 1) : key; + return !StringUtils.isEmpty(key) && key.startsWith("/") ? key.substring(1) : key; } /** @@ -379,9 +366,7 @@ static String maybeDeleteBeginningSlash(final String key) { * @return new key */ static String maybeAddBeginningSlash(final String key) { - return !StringUtils.isEmpty(key) && !key.startsWith("/") - ? "/" + key - : key; + return !StringUtils.isEmpty(key) && !key.startsWith("/") ? "/" + key : key; } /** @@ -394,28 +379,21 @@ static String maybeAddBeginningSlash(final String key) { * @param exception obs exception raised * @return an IOE which wraps the caught exception. 
*/ - static IOException translateException( - final String operation, final String path, + public static IOException translateException(final String operation, final String path, final ObsException exception) { - String message = String.format("%s%s: status [%d] - request id [%s] " - + "- error code [%s] - error message [%s] - trace :%s ", - operation, path != null ? " on " + path : "", - exception.getResponseCode(), exception.getErrorRequestId(), - exception.getErrorCode(), - exception.getErrorMessage(), exception); + String message = String.format( + "%s%s: status [%d] - request id [%s] " + "- error code [%s] - error message [%s] - trace :%s ", operation, + path != null ? " on " + path : "", exception.getResponseCode(), exception.getErrorRequestId(), + exception.getErrorCode(), exception.getErrorMessage(), exception); IOException ioe; int status = exception.getResponseCode(); switch (status) { case MOVED_PERMANENTLY_CODE: - message = - String.format("Received permanent redirect response, " - + "status [%d] - request id [%s] - " - + "error code [%s] - message [%s]", - exception.getResponseCode(), - exception.getErrorRequestId(), exception.getErrorCode(), - exception.getErrorMessage()); + message = String.format("Received permanent redirect response, " + "status [%d] - request id [%s] - " + + "error code [%s] - message [%s]", exception.getResponseCode(), exception.getErrorRequestId(), + exception.getErrorCode(), exception.getErrorMessage()); ioe = new OBSIOException(message, exception); break; // permissions @@ -463,11 +441,9 @@ static IOException translateException( * mistaken attempt to delete the root * directory. 
*/ - static void blockRootDelete(final String bucket, final String key) - throws InvalidRequestException { + static void blockRootDelete(final String bucket, final String key) throws InvalidRequestException { if (key.isEmpty() || "/".equals(key)) { - throw new InvalidRequestException( - "Bucket " + bucket + " cannot be deleted"); + throw new InvalidRequestException("Bucket " + bucket + " cannot be deleted"); } } @@ -479,16 +455,13 @@ static void blockRootDelete(final String bucket, final String key) * @param key key to blob to delete. * @throws IOException on any failure to delete object */ - static void deleteObject(final OBSFileSystem owner, final String key) - throws IOException { + static void deleteObject(final OBSFileSystem owner, final String key) throws IOException { blockRootDelete(owner.getBucket(), key); try { innerDeleteObject(owner, key); return; } catch (ObsException e) { - LOG.error( - "Failed to deleteObject for [{}], exception [{}]", - key, translateException("delete", key, e)); + LOG.error("Failed to deleteObject for [{}], exception [{}]", key, translateException("delete", key, e)); throw translateException("delete", key, e); } } @@ -500,26 +473,21 @@ static void deleteObject(final OBSFileSystem owner, final String key) * @param owner the owner OBSFileSystem instance. * @param key key to blob to delete. 
*/ - private static void innerDeleteObject(final OBSFileSystem owner, - final String key) throws IOException { + private static void innerDeleteObject(final OBSFileSystem owner, final String key) throws IOException { long delayMs; int retryTime = 0; long startTime = System.currentTimeMillis(); - while (System.currentTimeMillis() - startTime - <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { try { owner.getObsClient().deleteObject(owner.getBucket(), key); owner.getSchemeStatistics().incrementWriteOps(1); return; } catch (ObsException e) { - LOG.debug("Delete path failed with [{}], " - + "retry time [{}] - request id [{}] - " - + "error code [{}] - error message [{}]", - e.getResponseCode(), retryTime, e.getErrorRequestId(), + LOG.debug("Delete path failed with [{}], " + "retry time [{}] - request id [{}] - " + + "error code [{}] - error message [{}]", e.getResponseCode(), retryTime, e.getErrorRequestId(), e.getErrorCode(), e.getErrorMessage()); - IOException ioException = OBSCommonUtils.translateException( - "innerDeleteObject", key, e); + IOException ioException = OBSCommonUtils.translateException("innerDeleteObject", key, e); if (!(ioException instanceof OBSIOException)) { throw ioException; } @@ -546,20 +514,16 @@ private static void innerDeleteObject(final OBSFileSystem owner, * @param deleteRequest keys to delete on the obs-backend * @throws IOException on any failure to delete objects */ - static void deleteObjects(final OBSFileSystem owner, - final DeleteObjectsRequest deleteRequest) throws IOException { + static void deleteObjects(final OBSFileSystem owner, final DeleteObjectsRequest deleteRequest) throws IOException { DeleteObjectsResult result; deleteRequest.setQuiet(true); try { result = owner.getObsClient().deleteObjects(deleteRequest); owner.getSchemeStatistics().incrementWriteOps(1); } catch (ObsException e) { - LOG.warn("delete objects failed, 
request [{}], request id [{}] - " - + "error code [{}] - error message [{}]", - deleteRequest, e.getErrorRequestId(), e.getErrorCode(), - e.getErrorMessage()); - for (KeyAndVersion keyAndVersion - : deleteRequest.getKeyAndVersionsList()) { + LOG.warn("delete objects failed, request [{}], request id [{}] - " + "error code [{}] - error message [{}]", + deleteRequest, e.getErrorRequestId(), e.getErrorCode(), e.getErrorMessage()); + for (KeyAndVersion keyAndVersion : deleteRequest.getKeyAndVersionsList()) { deleteObject(owner, keyAndVersion.getKey()); } return; @@ -567,15 +531,11 @@ static void deleteObjects(final OBSFileSystem owner, // delete one by one if there is errors if (result != null) { - List errorResults - = result.getErrorResults(); + List errorResults = result.getErrorResults(); if (!errorResults.isEmpty()) { - LOG.warn("bulk delete {} objects, {} failed, begin to delete " - + "one by one.", - deleteRequest.getKeyAndVersionsList().size(), - errorResults.size()); - for (DeleteObjectsResult.ErrorResult errorResult - : errorResults) { + LOG.warn("bulk delete {} objects, {} failed, begin to delete " + "one by one.", + deleteRequest.getKeyAndVersionsList().size(), errorResults.size()); + for (DeleteObjectsResult.ErrorResult errorResult : errorResults) { deleteObject(owner, errorResult.getObjectKey()); } } @@ -591,11 +551,10 @@ static void deleteObjects(final OBSFileSystem owner, * @param srcfile source file * @return the request */ - static PutObjectRequest newPutObjectRequest(final OBSFileSystem owner, - final String key, final ObjectMetadata metadata, final File srcfile) { + static PutObjectRequest newPutObjectRequest(final OBSFileSystem owner, final String key, + final ObjectMetadata metadata, final File srcfile) { Preconditions.checkNotNull(srcfile); - PutObjectRequest putObjectRequest = new PutObjectRequest( - owner.getBucket(), key, srcfile); + PutObjectRequest putObjectRequest = new PutObjectRequest(owner.getBucket(), key, srcfile); 
putObjectRequest.setAcl(owner.getCannedACL()); putObjectRequest.setMetadata(metadata); if (owner.getSse().isSseCEnable()) { @@ -616,12 +575,10 @@ static PutObjectRequest newPutObjectRequest(final OBSFileSystem owner, * @param inputStream source data. * @return the request */ - static PutObjectRequest newPutObjectRequest(final OBSFileSystem owner, - final String key, final ObjectMetadata metadata, - final InputStream inputStream) { + static PutObjectRequest newPutObjectRequest(final OBSFileSystem owner, final String key, + final ObjectMetadata metadata, final InputStream inputStream) { Preconditions.checkNotNull(inputStream); - PutObjectRequest putObjectRequest = new PutObjectRequest( - owner.getBucket(), key, inputStream); + PutObjectRequest putObjectRequest = new PutObjectRequest(owner.getBucket(), key, inputStream); putObjectRequest.setAcl(owner.getCannedACL()); putObjectRequest.setMetadata(metadata); if (owner.getSse().isSseCEnable()) { @@ -643,8 +600,8 @@ static PutObjectRequest newPutObjectRequest(final OBSFileSystem owner, * @return the upload initiated * @throws ObsException on problems */ - static PutObjectResult putObjectDirect(final OBSFileSystem owner, - final PutObjectRequest putObjectRequest) throws ObsException { + static PutObjectResult putObjectDirect(final OBSFileSystem owner, final PutObjectRequest putObjectRequest) + throws ObsException { long len; if (putObjectRequest.getFile() != null) { len = putObjectRequest.getFile().length(); @@ -652,8 +609,7 @@ static PutObjectResult putObjectDirect(final OBSFileSystem owner, len = putObjectRequest.getMetadata().getContentLength(); } - PutObjectResult result = owner.getObsClient() - .putObject(putObjectRequest); + PutObjectResult result = owner.getObsClient().putObject(putObjectRequest); owner.getSchemeStatistics().incrementWriteOps(1); owner.getSchemeStatistics().incrementBytesWritten(len); return result; @@ -669,18 +625,15 @@ static PutObjectResult putObjectDirect(final OBSFileSystem owner, * @return the 
result of the operation. * @throws ObsException on problems */ - static UploadPartResult uploadPart(final OBSFileSystem owner, - final UploadPartRequest request) throws ObsException { + static UploadPartResult uploadPart(final OBSFileSystem owner, final UploadPartRequest request) throws ObsException { long len = request.getPartSize(); - UploadPartResult uploadPartResult = owner.getObsClient() - .uploadPart(request); + UploadPartResult uploadPartResult = owner.getObsClient().uploadPart(request); owner.getSchemeStatistics().incrementWriteOps(1); owner.getSchemeStatistics().incrementBytesWritten(len); return uploadPartResult; } - static void removeKeys(final OBSFileSystem owner, - final List keysToDelete, final boolean clearKeys, + static void removeKeys(final OBSFileSystem owner, final List keysToDelete, final boolean clearKeys, final boolean checkRootDelete) throws IOException { if (keysToDelete.isEmpty()) { // exit fast if there are no keys to delete @@ -693,23 +646,19 @@ static void removeKeys(final OBSFileSystem owner, } } - if (!owner.isEnableMultiObjectDelete() - || keysToDelete.size() < owner.getMultiDeleteThreshold()) { + if (!owner.isEnableMultiObjectDelete() || keysToDelete.size() < owner.getMultiDeleteThreshold()) { // delete one by one. for (KeyAndVersion keyVersion : keysToDelete) { deleteObject(owner, keyVersion.getKey()); } } else if (keysToDelete.size() <= owner.getMaxEntriesToDelete()) { // Only one batch. - DeleteObjectsRequest deleteObjectsRequest - = new DeleteObjectsRequest(owner.getBucket()); - deleteObjectsRequest.setKeyAndVersions( - keysToDelete.toArray(new KeyAndVersion[0])); + DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(owner.getBucket()); + deleteObjectsRequest.setKeyAndVersions(keysToDelete.toArray(new KeyAndVersion[0])); deleteObjects(owner, deleteObjectsRequest); } else { // Multi batches. 
- List keys = new ArrayList<>( - owner.getMaxEntriesToDelete()); + List keys = new ArrayList<>(owner.getMaxEntriesToDelete()); for (KeyAndVersion key : keysToDelete) { keys.add(key); if (keys.size() == owner.getMaxEntriesToDelete()) { @@ -738,8 +687,7 @@ static void removeKeys(final OBSFileSystem owner, * @param exception obs exception raised * @return an IOE which wraps the caught exception. */ - static IOException translateException(final String operation, - final Path path, final ObsException exception) { + static IOException translateException(final String operation, final Path path, final ObsException exception) { return translateException(operation, path.toString(), exception); } @@ -755,8 +703,7 @@ static IOException translateException(final String operation, * @throws IOException due to an IO problem. * @throws ObsException on failures inside the OBS SDK */ - static FileStatus[] innerListStatus(final OBSFileSystem owner, final Path f, - final boolean recursive) + static FileStatus[] innerListStatus(final OBSFileSystem owner, final Path f, final boolean recursive) throws FileNotFoundException, IOException, ObsException { Path path = qualify(owner, f); String key = pathToKey(owner, path); @@ -772,15 +719,11 @@ static FileStatus[] innerListStatus(final OBSFileSystem owner, final Path f, if (fileStatus.isDirectory()) { key = maybeAddTrailingSlash(key); String delimiter = recursive ? 
null : "/"; - ListObjectsRequest request = createListObjectsRequest(owner, key, - delimiter); - LOG.debug( - "listStatus: doing listObjects for directory {} - recursive {}", - f, recursive); + ListObjectsRequest request = createListObjectsRequest(owner, key, delimiter); + LOG.debug("listStatus: doing listObjects for directory {} - recursive {}", f, recursive); OBSListing.FileStatusListingIterator files = owner.getObsListing() - .createFileStatusListingIterator( - path, request, OBSListing.ACCEPT_ALL, + .createFileStatusListingIterator(path, request, OBSListing.ACCEPT_ALL, new OBSListing.AcceptAllButSelfAndS3nDirs(path)); result = new ArrayList<>(files.getBatchSize()); while (files.hasNext()) { @@ -804,14 +747,13 @@ static FileStatus[] innerListStatus(final OBSFileSystem owner, final Path f, * @param delimiter any delimiter * @return the request */ - static ListObjectsRequest createListObjectsRequest( - final OBSFileSystem owner, final String key, final String delimiter) { + static ListObjectsRequest createListObjectsRequest(final OBSFileSystem owner, final String key, + final String delimiter) { return createListObjectsRequest(owner, key, delimiter, -1); } - static ListObjectsRequest createListObjectsRequest( - final OBSFileSystem owner, final String key, final String delimiter, - final int maxKeyNum) { + static ListObjectsRequest createListObjectsRequest(final OBSFileSystem owner, final String key, + final String delimiter, final int maxKeyNum) { ListObjectsRequest request = new ListObjectsRequest(); request.setBucketName(owner.getBucket()); if (maxKeyNum > 0 && maxKeyNum < owner.getMaxKeys()) { @@ -840,9 +782,7 @@ static ListObjectsRequest createListObjectsRequest( * @throws PathIsNotEmptyDirectoryException if the operation was explicitly * rejected. 
*/ - static boolean rejectRootDirectoryDelete(final String bucket, - final boolean isEmptyDir, - final boolean recursive) + static boolean rejectRootDirectoryDelete(final String bucket, final boolean isEmptyDir, final boolean recursive) throws PathIsNotEmptyDirectoryException { LOG.info("obs delete the {} root directory of {}", bucket, recursive); if (isEmptyDir) { @@ -888,8 +828,7 @@ static boolean innerMkdirs(final OBSFileSystem owner, final Path path) } if (fileStatus.isFile()) { throw new FileAlreadyExistsException( - String.format("Can't make directory for path '%s'" - + " since it is a file.", fPart)); + String.format("Can't make directory for path '%s'" + " since it is a file.", fPart)); } } catch (FileNotFoundException fnfe) { LOG.debug("file {} not fount, but ignore.", path); @@ -920,33 +859,30 @@ static boolean innerMkdirs(final OBSFileSystem owner, final Path path) * @return the results * @throws IOException on any failure to list objects */ - static ObjectListing listObjects(final OBSFileSystem owner, - final ListObjectsRequest request) throws IOException { - if (request.getDelimiter() == null && request.getMarker() == null - && owner.isFsBucket() && owner.isObsClientDFSListEnable()) { + static ObjectListing listObjects(final OBSFileSystem owner, final ListObjectsRequest request) throws IOException { + if (request.getDelimiter() == null && request.getMarker() == null && owner.isFsBucket() + && owner.isObsClientDFSListEnable()) { return OBSFsDFSListing.fsDFSListObjects(owner, request); } return commonListObjects(owner, request); } - static ObjectListing commonListObjects(final OBSFileSystem owner, - final ListObjectsRequest request) throws IOException { + static ObjectListing commonListObjects(final OBSFileSystem owner, final ListObjectsRequest request) + throws IOException { int retryTime = 0; long delayMs; long startTime = System.currentTimeMillis(); - while (System.currentTimeMillis() - startTime - <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) 
{ + while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { try { owner.getSchemeStatistics().incrementReadOps(1); return owner.getObsClient().listObjects(request); } catch (ObsException e) { - LOG.debug("Failed to commonListObjects for request[{}], retry " - + "time [{}], due to exception[{}]", + LOG.debug("Failed to commonListObjects for request[{}], retry " + "time [{}], due to exception[{}]", request, retryTime, e); - IOException ioException = OBSCommonUtils.translateException( - "listObjects (" + request + ")", request.getPrefix(), e); + IOException ioException = OBSCommonUtils.translateException("listObjects (" + request + ")", + request.getPrefix(), e); if (!(ioException instanceof OBSIOException)) { throw ioException; @@ -957,8 +893,7 @@ static ObjectListing commonListObjects(final OBSFileSystem owner, try { Thread.sleep(delayMs); } catch (InterruptedException ie) { - LOG.error("Failed to commonListObjects for request[{}], " - + "retry time [{}], due to exception[{}]", + LOG.error("Failed to commonListObjects for request[{}], " + "retry time [{}], due to exception[{}]", request, retryTime, ioException); throw ioException; } @@ -977,19 +912,16 @@ static ObjectListing commonListObjects(final OBSFileSystem owner, * @return the next result object * @throws IOException on any failure to list the next set of objects */ - static ObjectListing continueListObjects(final OBSFileSystem owner, - final ObjectListing objects) throws IOException { - if (objects.getDelimiter() == null && owner.isFsBucket() - && owner.isObsClientDFSListEnable()) { - return OBSFsDFSListing.fsDFSContinueListObjects(owner, - (OBSFsDFSListing) objects); + static ObjectListing continueListObjects(final OBSFileSystem owner, final ObjectListing objects) + throws IOException { + if (objects.getDelimiter() == null && owner.isFsBucket() && owner.isObsClientDFSListEnable()) { + return OBSFsDFSListing.fsDFSContinueListObjects(owner, (OBSFsDFSListing) 
objects); } return commonContinueListObjects(owner, objects); } - private static ObjectListing commonContinueListObjects( - final OBSFileSystem owner, final ObjectListing objects) + private static ObjectListing commonContinueListObjects(final OBSFileSystem owner, final ObjectListing objects) throws IOException { String delimiter = objects.getDelimiter(); int maxKeyNum = objects.getMaxKeys(); @@ -1008,23 +940,21 @@ private static ObjectListing commonContinueListObjects( return commonContinueListObjects(owner, request); } - static ObjectListing commonContinueListObjects(final OBSFileSystem owner, - final ListObjectsRequest request) throws IOException { + static ObjectListing commonContinueListObjects(final OBSFileSystem owner, final ListObjectsRequest request) + throws IOException { long delayMs; int retryTime = 0; long startTime = System.currentTimeMillis(); - while (System.currentTimeMillis() - startTime - <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { try { owner.getSchemeStatistics().incrementReadOps(1); return owner.getObsClient().listObjects(request); } catch (ObsException e) { - LOG.debug("Continue list objects failed for request[{}], retry" - + " time[{}], due to exception[{}]", + LOG.debug("Continue list objects failed for request[{}], retry" + " time[{}], due to exception[{}]", request, retryTime, e); - IOException ioException = OBSCommonUtils.translateException( - "listObjects (" + request + ")", request.getPrefix(), e); + IOException ioException = OBSCommonUtils.translateException("listObjects (" + request + ")", + request.getPrefix(), e); if (!(ioException instanceof OBSIOException)) { throw ioException; @@ -1035,8 +965,7 @@ static ObjectListing commonContinueListObjects(final OBSFileSystem owner, try { Thread.sleep(delayMs); } catch (InterruptedException ie) { - LOG.error("Continue list objects failed for request[{}], " - + "retry time[{}], due 
to exception[{}]", + LOG.error("Continue list objects failed for request[{}], " + "retry time[{}], due to exception[{}]", request, retryTime, ioException); throw ioException; } @@ -1054,10 +983,8 @@ static ObjectListing commonContinueListObjects(final OBSFileSystem owner, * @param size object size * @return true if it meets the criteria for being an object */ - public static boolean objectRepresentsDirectory(final String name, - final long size) { - return !name.isEmpty() && name.charAt(name.length() - 1) == '/' - && size == 0L; + public static boolean objectRepresentsDirectory(final String name, final long size) { + return !name.isEmpty() && name.charAt(name.length() - 1) == '/' && size == 0L; } /** @@ -1072,27 +999,22 @@ public static long dateToLong(final Date date) { return 0L; } - return date.getTime() / OBSConstants.SEC2MILLISEC_FACTOR - * OBSConstants.SEC2MILLISEC_FACTOR; + return date.getTime() / OBSConstants.SEC2MILLISEC_FACTOR * OBSConstants.SEC2MILLISEC_FACTOR; } // Used to check if a folder is empty or not. 
- static boolean isFolderEmpty(final OBSFileSystem owner, final String key) - throws IOException { + static boolean isFolderEmpty(final OBSFileSystem owner, final String key) throws IOException { long delayMs; int retryTime = 0; long startTime = System.currentTimeMillis(); - while (System.currentTimeMillis() - startTime - <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { try { return innerIsFolderEmpty(owner, key); } catch (ObsException e) { - LOG.debug( - "Failed to check empty folder for [{}], retry time [{}], " - + "exception [{}]", key, retryTime, e); + LOG.debug("Failed to check empty folder for [{}], retry time [{}], " + "exception [{}]", key, retryTime, + e); - IOException ioException = OBSCommonUtils.translateException( - "innerIsFolderEmpty", key, e); + IOException ioException = OBSCommonUtils.translateException("innerIsFolderEmpty", key, e); if (!(ioException instanceof OBSIOException)) { throw ioException; @@ -1111,22 +1033,17 @@ static boolean isFolderEmpty(final OBSFileSystem owner, final String key) try { return innerIsFolderEmpty(owner, key); } catch (ObsException e) { - throw OBSCommonUtils.translateException( - "innerIsFolderEmpty", key, e); + throw OBSCommonUtils.translateException("innerIsFolderEmpty", key, e); } } // Used to check if a folder is empty or not by counting the number of // sub objects in list. 
-    private static boolean isFolderEmpty(final String key,
-        final ObjectListing objects) {
+    private static boolean isFolderEmpty(final String key, final ObjectListing objects) {
         int count = objects.getObjects().size();
         if (count >= 2) {
             return false;
-        } else if (count == 1 && !objects.getObjects()
-            .get(0)
-            .getObjectKey()
-            .equals(key)) {
+        } else if (count == 1 && !objects.getObjects().get(0).getObjectKey().equals(key)) {
             return false;
         }
@@ -1139,8 +1056,7 @@ private static boolean isFolderEmpty(final String key,
     }

     // Used to check if a folder is empty or not.
-    static boolean innerIsFolderEmpty(final OBSFileSystem owner,
-        final String key)
+    static boolean innerIsFolderEmpty(final OBSFileSystem owner, final String key)
         throws FileNotFoundException, ObsException {
         String obsKey = maybeAddTrailingSlash(key);
         ListObjectsRequest request = new ListObjectsRequest();
@@ -1151,20 +1067,17 @@ static boolean innerIsFolderEmpty(final OBSFileSystem owner,
         owner.getSchemeStatistics().incrementReadOps(1);
         ObjectListing objects = owner.getObsClient().listObjects(request);
-        if (!objects.getCommonPrefixes().isEmpty() || !objects.getObjects()
-            .isEmpty()) {
+        if (!objects.getCommonPrefixes().isEmpty() || !objects.getObjects().isEmpty()) {
             if (isFolderEmpty(obsKey, objects)) {
                 LOG.debug("Found empty directory {}", obsKey);
                 return true;
             }
             if (LOG.isDebugEnabled()) {
-                LOG.debug("Found path as directory (with /): {}/{}",
-                    objects.getCommonPrefixes().size(),
+                LOG.debug("Found path as directory (with /): {}/{}", objects.getCommonPrefixes().size(),
                     objects.getObjects().size());
                 for (ObsObject summary : objects.getObjects()) {
-                    LOG.debug("Summary: {} {}", summary.getObjectKey(),
-                        summary.getMetadata().getContentLength());
+                    LOG.debug("Summary: {} {}", summary.getObjectKey(), summary.getMetadata().getContentLength());
                 }
                 for (String prefix : objects.getCommonPrefixes()) {
                     LOG.debug("Prefix: {}", prefix);
@@ -1192,11 +1105,10 @@ static boolean innerIsFolderEmpty(final OBSFileSystem owner,
      * @return a located status with block locations set up from this FS.
      * @throws IOException IO Problems.
      */
-    static LocatedFileStatus toLocatedFileStatus(final OBSFileSystem owner,
-        final FileStatus status) throws IOException {
-        return new LocatedFileStatus(
-            status, status.isFile() ? owner.getFileBlockLocations(status, 0,
-            status.getLen()) : null);
+    static LocatedFileStatus toLocatedFileStatus(final OBSFileSystem owner, final FileStatus status)
+        throws IOException {
+        return new LocatedFileStatus(status,
+            status.isFile() ? owner.getFileBlockLocations(status, 0, status.getLen()) : null);
     }

     /**
@@ -1209,29 +1121,24 @@ static LocatedFileStatus toLocatedFileStatus(final OBSFileSystem owner,
      * @return the request
      * @throws IOException any problem
      */
-    static WriteFileRequest newAppendFileRequest(final OBSFileSystem owner,
-        final String key, final long recordPosition, final File tmpFile)
-        throws IOException {
+    static WriteFileRequest newAppendFileRequest(final OBSFileSystem owner, final String key, final long recordPosition,
+        final File tmpFile) throws IOException {
         Preconditions.checkNotNull(key);
         Preconditions.checkNotNull(tmpFile);
         ObsFSAttribute obsFsAttribute;
         try {
-            GetAttributeRequest getAttributeReq = new GetAttributeRequest(
-                owner.getBucket(), key);
+            GetAttributeRequest getAttributeReq = new GetAttributeRequest(owner.getBucket(), key);
             obsFsAttribute = owner.getObsClient().getAttribute(getAttributeReq);
         } catch (ObsException e) {
             throw translateException("GetAttributeRequest", key, e);
         }

-        long appendPosition = Math.max(recordPosition,
-            obsFsAttribute.getContentLength());
+        long appendPosition = Math.max(recordPosition, obsFsAttribute.getContentLength());
         if (recordPosition != obsFsAttribute.getContentLength()) {
-            LOG.warn("append url[{}] position[{}], file contentLength[{}] not"
-                + " equal to recordPosition[{}].", key, appendPosition,
-                obsFsAttribute.getContentLength(), recordPosition);
+            LOG.warn("append url[{}] position[{}], file contentLength[{}] not" + " equal to recordPosition[{}].", key,
+                appendPosition, obsFsAttribute.getContentLength(), recordPosition);
         }

-        WriteFileRequest writeFileReq = new WriteFileRequest(owner.getBucket(),
-            key, tmpFile, appendPosition);
+        WriteFileRequest writeFileReq = new WriteFileRequest(owner.getBucket(), key, tmpFile, appendPosition);
         writeFileReq.setAcl(owner.getCannedACL());
         return writeFileReq;
     }
@@ -1246,29 +1153,24 @@ static WriteFileRequest newAppendFileRequest(final OBSFileSystem owner,
      * @return the request
      * @throws IOException any problem
      */
-    static WriteFileRequest newAppendFileRequest(final OBSFileSystem owner,
-        final String key, final long recordPosition,
+    static WriteFileRequest newAppendFileRequest(final OBSFileSystem owner, final String key, final long recordPosition,
         final InputStream inputStream) throws IOException {
         Preconditions.checkNotNull(key);
         Preconditions.checkNotNull(inputStream);
         ObsFSAttribute obsFsAttribute;
         try {
-            GetAttributeRequest getAttributeReq = new GetAttributeRequest(
-                owner.getBucket(), key);
+            GetAttributeRequest getAttributeReq = new GetAttributeRequest(owner.getBucket(), key);
             obsFsAttribute = owner.getObsClient().getAttribute(getAttributeReq);
         } catch (ObsException e) {
             throw translateException("GetAttributeRequest", key, e);
         }

-        long appendPosition = Math.max(recordPosition,
-            obsFsAttribute.getContentLength());
+        long appendPosition = Math.max(recordPosition, obsFsAttribute.getContentLength());
         if (recordPosition != obsFsAttribute.getContentLength()) {
-            LOG.warn("append url[{}] position[{}], file contentLength[{}] not"
-                + " equal to recordPosition[{}].", key, appendPosition,
-                obsFsAttribute.getContentLength(), recordPosition);
+            LOG.warn("append url[{}] position[{}], file contentLength[{}] not" + " equal to recordPosition[{}].", key,
+                appendPosition, obsFsAttribute.getContentLength(), recordPosition);
         }

-        WriteFileRequest writeFileReq = new WriteFileRequest(owner.getBucket(),
-            key, inputStream, appendPosition);
+        WriteFileRequest writeFileReq = new WriteFileRequest(owner.getBucket(), key, inputStream, appendPosition);
         writeFileReq.setAcl(owner.getCannedACL());
         return writeFileReq;
     }
@@ -1280,24 +1182,20 @@ static WriteFileRequest newAppendFileRequest(final OBSFileSystem owner,
      * @param appendFileRequest append object request
      * @throws IOException on any failure to append file
      */
-    static void appendFile(final OBSFileSystem owner,
-        final WriteFileRequest appendFileRequest) throws IOException {
+    static void appendFile(final OBSFileSystem owner, final WriteFileRequest appendFileRequest) throws IOException {
         long len = 0;
         if (appendFileRequest.getFile() != null) {
             len = appendFileRequest.getFile().length();
         }

         try {
-            LOG.debug("Append file, key {} position {} size {}",
-                appendFileRequest.getObjectKey(),
-                appendFileRequest.getPosition(),
-                len);
+            LOG.debug("Append file, key {} position {} size {}", appendFileRequest.getObjectKey(),
+                appendFileRequest.getPosition(), len);
             owner.getObsClient().writeFile(appendFileRequest);
             owner.getSchemeStatistics().incrementWriteOps(1);
             owner.getSchemeStatistics().incrementBytesWritten(len);
         } catch (ObsException e) {
-            throw translateException("AppendFile",
-                appendFileRequest.getObjectKey(), e);
+            throw translateException("AppendFile", appendFileRequest.getObjectKey(), e);
         }
     }
@@ -1332,8 +1230,7 @@ static void closeAll(final java.io.Closeable... closeables) {
      * @param ee execution exception
      * @return an IOE which can be thrown
      */
-    static IOException extractException(final String operation,
-        final String path, final ExecutionException ee) {
+    static IOException extractException(final String operation, final String path, final ExecutionException ee) {
         IOException ioe;
         Throwable cause = ee.getCause();
         if (cause instanceof ObsException) {
@@ -1355,24 +1252,16 @@ static IOException extractException(final String operation,
      * @param owner owner of the file
      * @return a status entry
      */
-    static OBSFileStatus createFileStatus(
-        final Path keyPath, final ObsObject summary, final long blockSize,
+    static OBSFileStatus createFileStatus(final Path keyPath, final ObsObject summary, final long blockSize,
         final String owner) {
-        if (objectRepresentsDirectory(
-            summary.getObjectKey(), summary.getMetadata().getContentLength())) {
-            long lastModified =
-                summary.getMetadata().getLastModified() == null
-                    ? System.currentTimeMillis()
-                    : OBSCommonUtils.dateToLong(
-                        summary.getMetadata().getLastModified());
+        if (objectRepresentsDirectory(summary.getObjectKey(), summary.getMetadata().getContentLength())) {
+            long lastModified = summary.getMetadata().getLastModified() == null
+                ? System.currentTimeMillis()
+                : OBSCommonUtils.dateToLong(summary.getMetadata().getLastModified());
             return new OBSFileStatus(keyPath, lastModified, owner);
         } else {
-            return new OBSFileStatus(
-                summary.getMetadata().getContentLength(),
-                dateToLong(summary.getMetadata().getLastModified()),
-                keyPath,
-                blockSize,
-                owner);
+            return new OBSFileStatus(summary.getMetadata().getContentLength(),
+                dateToLong(summary.getMetadata().getLastModified()), keyPath, blockSize, owner);
         }
     }
@@ -1386,20 +1275,12 @@ static OBSFileStatus createFileStatus(
      * @return OBSAccessKeys
      * @throws IOException problems retrieving passwords from KMS.
      */
-    static OBSLoginHelper.Login getOBSAccessKeys(final URI name,
-        final Configuration conf)
-        throws IOException {
-        OBSLoginHelper.Login login
-            = OBSLoginHelper.extractLoginDetailsWithWarnings(name);
-        Configuration c =
-            ProviderUtils.excludeIncompatibleCredentialProviders(conf,
-                OBSFileSystem.class);
-        String accessKey = getPassword(c, OBSConstants.ACCESS_KEY,
-            login.getUser());
-        String secretKey = getPassword(c, OBSConstants.SECRET_KEY,
-            login.getPassword());
-        String sessionToken = getPassword(c, OBSConstants.SESSION_TOKEN,
-            login.getToken());
+    static OBSLoginHelper.Login getOBSAccessKeys(final URI name, final Configuration conf) throws IOException {
+        OBSLoginHelper.Login login = OBSLoginHelper.extractLoginDetailsWithWarnings(name);
+        Configuration c = ProviderUtils.excludeIncompatibleCredentialProviders(conf, OBSFileSystem.class);
+        String accessKey = getPassword(c, OBSConstants.ACCESS_KEY, login.getUser());
+        String secretKey = getPassword(c, OBSConstants.SECRET_KEY, login.getPassword());
+        String sessionToken = getPassword(c, OBSConstants.SESSION_TOKEN, login.getToken());
         return new OBSLoginHelper.Login(accessKey, secretKey, sessionToken);
     }
@@ -1414,8 +1295,7 @@ static OBSLoginHelper.Login getOBSAccessKeys(final URI name,
      * @return a password or "".
      * @throws IOException on any problem
      */
-    private static String getPassword(final Configuration conf,
-        final String key, final String val) throws IOException {
+    private static String getPassword(final Configuration conf, final String key, final String val) throws IOException {
         return StringUtils.isEmpty(val) ? lookupPassword(conf, key) : val;
     }
@@ -1427,8 +1307,7 @@ private static String getPassword(final Configuration conf,
      * @return a password or the value in {@code defVal}
      * @throws IOException on any problem
      */
-    private static String lookupPassword(final Configuration conf,
-        final String key) throws IOException {
+    private static String lookupPassword(final Configuration conf, final String key) throws IOException {
         try {
             final char[] pass = conf.getPassword(key);
             return pass != null ? new String(pass).trim() : "";
@@ -1444,8 +1323,7 @@ private static String lookupPassword(final Configuration conf,
      * @return string value
      */
     static String stringify(final ObsObject summary) {
-        return summary.getObjectKey() + " size=" + summary.getMetadata()
-            .getContentLength();
+        return summary.getObjectKey() + " size=" + summary.getMetadata().getContentLength();
     }

     /**
@@ -1458,14 +1336,10 @@ static String stringify(final ObsObject summary) {
      * @return the value
      * @throws IllegalArgumentException if the value is below the minimum
      */
-    static int intOption(final Configuration conf, final String key,
-        final int defVal,
-        final int min) {
+    public static int intOption(final Configuration conf, final String key, final int defVal, final int min) {
         int v = conf.getInt(key, defVal);
-        Preconditions.checkArgument(
-            v >= min,
-            String.format("Value of %s: %d is below the minimum value %d", key,
-                v, min));
+        Preconditions.checkArgument(v >= min,
+            String.format("Value of %s: %d is below the minimum value %d", key, v, min));
         LOG.debug("Value of {} is {}", key, v);
         return v;
     }
@@ -1480,14 +1354,10 @@ static int intOption(final Configuration conf, final String key,
      * @return the value
      * @throws IllegalArgumentException if the value is below the minimum
      */
-    static long longOption(final Configuration conf, final String key,
-        final long defVal,
-        final long min) {
+    static long longOption(final Configuration conf, final String key, final long defVal, final long min) {
        long v = conf.getLong(key, defVal);
-        Preconditions.checkArgument(
-            v >= min,
-            String.format("Value of %s: %d is below the minimum value %d", key,
-                v, min));
+        Preconditions.checkArgument(v >= min,
+            String.format("Value of %s: %d is below the minimum value %d", key, v, min));
         LOG.debug("Value of {} is {}", key, v);
         return v;
     }
@@ -1503,14 +1373,10 @@ static long longOption(final Configuration conf, final String key,
      * @return the value
      * @throws IllegalArgumentException if the value is below the minimum
      */
-    static long longBytesOption(final Configuration conf, final String key,
-        final long defVal,
-        final long min) {
+    public static long longBytesOption(final Configuration conf, final String key, final long defVal, final long min) {
         long v = conf.getLongBytes(key, defVal);
-        Preconditions.checkArgument(
-            v >= min,
-            String.format("Value of %s: %d is below the minimum value %d", key,
-                v, min));
+        Preconditions.checkArgument(v >= min,
+            String.format("Value of %s: %d is below the minimum value %d", key, v, min));
         LOG.debug("Value of {} is {}", key, v);
         return v;
     }
@@ -1525,12 +1391,10 @@ static long longBytesOption(final Configuration conf, final String key,
      * @param defVal default value
      * @return the value, guaranteed to be above the minimum size
      */
-    public static long getMultipartSizeProperty(final Configuration conf,
-        final String property, final long defVal) {
+    public static long getMultipartSizeProperty(final Configuration conf, final String property, final long defVal) {
         long partSize = conf.getLongBytes(property, defVal);
         if (partSize < OBSConstants.MULTIPART_MIN_SIZE) {
-            LOG.warn("{} must be at least 5 MB; configured value is {}",
-                property, partSize);
+            LOG.warn("{} must be at least 5 MB; configured value is {}", property, partSize);
             partSize = OBSConstants.MULTIPART_MIN_SIZE;
         }
         return partSize;
@@ -1544,13 +1408,9 @@ public static long getMultipartSizeProperty(final Configuration conf,
      * @return the size, guaranteed to be less than or equal to the max value of
      * an integer.
      */
-    static int ensureOutputParameterInRange(final String name,
-        final long size) {
+    static int ensureOutputParameterInRange(final String name, final long size) {
         if (size > Integer.MAX_VALUE) {
-            LOG.warn(
-                "obs: {} capped to ~2.14GB"
-                    + " (maximum allowed size with current output mechanism)",
-                name);
+            LOG.warn("obs: {} capped to ~2.14GB" + " (maximum allowed size with current output mechanism)", name);
             return Integer.MAX_VALUE;
         } else {
             return (int) size;
@@ -1580,12 +1440,10 @@ static int ensureOutputParameterInRange(final String name,
      * @param bucket bucket name. Must not be empty.
      * @return a (potentially) patched clone of the original.
      */
-    static Configuration propagateBucketOptions(final Configuration source,
-        final String bucket) {
+    static Configuration propagateBucketOptions(final Configuration source, final String bucket) {
         Preconditions.checkArgument(StringUtils.isNotEmpty(bucket), "bucket");
-        final String bucketPrefix = OBSConstants.FS_OBS_BUCKET_PREFIX + bucket
-            + '.';
+        final String bucketPrefix = OBSConstants.FS_OBS_BUCKET_PREFIX + bucket + '.';
         LOG.debug("Propagating entries under {}", bucketPrefix);
         final Configuration dest = new Configuration(source);
         for (Map.Entry entry : source) {
@@ -1622,18 +1480,16 @@ static Configuration propagateBucketOptions(final Configuration source,
      * @param conf configuration to patch
      */
     static void patchSecurityCredentialProviders(final Configuration conf) {
-        Collection customCredentials =
-            conf.getStringCollection(
-                OBSConstants.OBS_SECURITY_CREDENTIAL_PROVIDER_PATH);
-        Collection hadoopCredentials = conf.getStringCollection(
-            CREDENTIAL_PROVIDER_PATH);
+        Collection customCredentials = conf.getStringCollection(
+            OBSConstants.OBS_SECURITY_CREDENTIAL_PROVIDER_PATH);
+        Collection hadoopCredentials = conf.getStringCollection(CREDENTIAL_PROVIDER_PATH);
         if (!customCredentials.isEmpty()) {
             List all = Lists.newArrayList(customCredentials);
             all.addAll(hadoopCredentials);
             String joined = StringUtils.join(all, ',');
             LOG.debug("Setting {} to {}", CREDENTIAL_PROVIDER_PATH, joined);
-            conf.set(CREDENTIAL_PROVIDER_PATH, joined, "patch of "
-                + OBSConstants.OBS_SECURITY_CREDENTIAL_PROVIDER_PATH);
+            conf.set(CREDENTIAL_PROVIDER_PATH, joined,
+                "patch of " + OBSConstants.OBS_SECURITY_CREDENTIAL_PROVIDER_PATH);
         }
     }
@@ -1643,21 +1499,16 @@ static void patchSecurityCredentialProviders(final Configuration conf) {
      * @param conf the configuration file used by filesystem
      * @throws IOException write buffer directory is not accessible
      */
-    static void verifyBufferDirAccessible(final Configuration conf)
-        throws IOException {
-        String bufferDirKey = conf.get(OBSConstants.BUFFER_DIR) != null
-            ? OBSConstants.BUFFER_DIR : "hadoop.tmp.dir";
+    static void verifyBufferDirAccessible(final Configuration conf) throws IOException {
+        String bufferDirKey = conf.get(OBSConstants.BUFFER_DIR) != null ? OBSConstants.BUFFER_DIR : "hadoop.tmp.dir";
         String bufferDirs = conf.get(bufferDirKey);
-        String[] dirStrings =
-            org.apache.hadoop.util.StringUtils.getTrimmedStrings(bufferDirs);
+        String[] dirStrings = org.apache.hadoop.util.StringUtils.getTrimmedStrings(bufferDirs);
         if (dirStrings.length < 1) {
-            throw new AccessControlException("There is no write buffer dir "
-                + "for " + bufferDirKey
-                + ", user: " + System.getProperty("user.name"));
+            throw new AccessControlException(
+                "There is no write buffer dir " + "for " + bufferDirKey + ", user: " + System.getProperty("user.name"));
         }
-        LocalFileSystem localFs =
-            org.apache.hadoop.fs.FileSystem.getLocal(conf);
+        LocalFileSystem localFs = org.apache.hadoop.fs.FileSystem.getLocal(conf);
         for (String dir : dirStrings) {
             Path tmpDir = new Path(dir);
             if (localFs.mkdirs(tmpDir) || localFs.exists(tmpDir)) {
@@ -1667,8 +1518,7 @@ static void verifyBufferDirAccessible(final Configuration conf)
                     : new File(dir);
                     DiskChecker.checkDir(tmpFile);
                 } catch (DiskChecker.DiskErrorException e) {
-                    throw new AccessControlException(
-                        "user: " + System.getProperty("user.name") + ", " + e);
+                    throw new AccessControlException("user: " + System.getProperty("user.name") + ", " + e);
                 }
             }
         }
@@ -1682,16 +1532,13 @@ static void verifyBufferDirAccessible(final Configuration conf)
      * @throws FileNotFoundException the bucket is absent
      * @throws IOException any other problem talking to OBS
      */
-    static void verifyBucketExists(final OBSFileSystem owner)
-        throws FileNotFoundException, IOException {
+    static void verifyBucketExists(final OBSFileSystem owner) throws FileNotFoundException, IOException {
         try {
             if (!innerVerifyBucketExists(owner)) {
-                throw new FileNotFoundException(
-                    "Bucket " + owner.getBucket() + " does not exist");
+                throw new FileNotFoundException("Bucket " + owner.getBucket() + " does not exist");
             }
         } catch (IOException e) {
-            LOG.error("Failed to head bucket for [{}] , exception [{}]",
-                owner.getBucket(), e);
+            LOG.error("Failed to head bucket for [{}] , exception [{}]", owner.getBucket(), e);
             throw e;
         }
     }
@@ -1702,21 +1549,17 @@ static void verifyBucketExists(final OBSFileSystem owner)
      * @param owner the owner OBSFileSystem instance
      * @return boolean whether the bucket exists
      */
-    private static boolean innerVerifyBucketExists(final OBSFileSystem owner)
-        throws IOException {
+    private static boolean innerVerifyBucketExists(final OBSFileSystem owner) throws IOException {
         long delayMs;
         int retryTime = 0;
         long startTime = System.currentTimeMillis();
-        while (System.currentTimeMillis() - startTime
-            <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
+        while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
             try {
                 return owner.getObsClient().headBucket(owner.getBucket());
             } catch (ObsException e) {
-                IOException ioException = OBSCommonUtils.translateException(
-                    "verifyBucketExists", owner.getBucket(), e);
-                LOG.debug("Failed to head bucket for [{}], retry time [{}], "
-                    + "exception [{}]", owner.getBucket(), retryTime,
-                    ioException);
+                IOException ioException = OBSCommonUtils.translateException("verifyBucketExists", owner.getBucket(), e);
+                LOG.debug("Failed to head bucket for [{}], retry time [{}], " + "exception [{}]", owner.getBucket(),
+                    retryTime, ioException);
                 if (!(ioException instanceof OBSIOException)) {
                     throw ioException;
@@ -1743,54 +1586,43 @@ private static boolean innerVerifyBucketExists(final OBSFileSystem owner)
      * @param conf the configuration to use for the FS
      * @throws IOException on any failure to initialize multipart upload
      */
-    static void initMultipartUploads(final OBSFileSystem owner,
-        final Configuration conf)
-        throws IOException {
-        boolean purgeExistingMultipart =
-            conf.getBoolean(OBSConstants.PURGE_EXISTING_MULTIPART,
-                OBSConstants.DEFAULT_PURGE_EXISTING_MULTIPART);
-        long purgeExistingMultipartAge =
-            longOption(conf, OBSConstants.PURGE_EXISTING_MULTIPART_AGE,
-                OBSConstants.DEFAULT_PURGE_EXISTING_MULTIPART_AGE, 0);
+    static void initMultipartUploads(final OBSFileSystem owner, final Configuration conf) throws IOException {
+        boolean purgeExistingMultipart = conf.getBoolean(OBSConstants.PURGE_EXISTING_MULTIPART,
+            OBSConstants.DEFAULT_PURGE_EXISTING_MULTIPART);
+        long purgeExistingMultipartAge = longOption(conf, OBSConstants.PURGE_EXISTING_MULTIPART_AGE,
+            OBSConstants.DEFAULT_PURGE_EXISTING_MULTIPART_AGE, 0);

         if (!purgeExistingMultipart) {
             return;
         }

-        final Date purgeBefore = new Date(
-            new Date().getTime() - purgeExistingMultipartAge * 1000);
+        final Date purgeBefore = new Date(new Date().getTime() - purgeExistingMultipartAge * 1000);

         try {
-            ListMultipartUploadsRequest request
-                = new ListMultipartUploadsRequest(owner.getBucket());
+            ListMultipartUploadsRequest request = new ListMultipartUploadsRequest(owner.getBucket());
             while (true) {
                 // List + purge
-                MultipartUploadListing uploadListing = owner.getObsClient()
-                    .listMultipartUploads(request);
-                for (MultipartUpload upload
-                    : uploadListing.getMultipartTaskList()) {
+                MultipartUploadListing uploadListing = owner.getObsClient().listMultipartUploads(request);
+                for (MultipartUpload upload : uploadListing.getMultipartTaskList()) {
                     if (upload.getInitiatedDate().compareTo(purgeBefore) < 0) {
-                        owner.getObsClient().abortMultipartUpload(
-                            new AbortMultipartUploadRequest(
-                                owner.getBucket(), upload.getObjectKey(),
-                                upload.getUploadId()));
+                        owner.getObsClient()
+                            .abortMultipartUpload(
+                                new AbortMultipartUploadRequest(owner.getBucket(), upload.getObjectKey(),
+                                    upload.getUploadId()));
                     }
                 }
                 if (!uploadListing.isTruncated()) {
                     break;
                 }
-                request.setUploadIdMarker(
-                    uploadListing.getNextUploadIdMarker());
+                request.setUploadIdMarker(uploadListing.getNextUploadIdMarker());
                 request.setKeyMarker(uploadListing.getNextKeyMarker());
             }
         } catch (ObsException e) {
             if (e.getResponseCode() == FORBIDDEN_CODE) {
-                LOG.debug("Failed to purging multipart uploads against {},"
-                    + " FS may be read only", owner.getBucket(),
-                    e);
+                LOG.debug("Failed to purge multipart uploads against {}," + " FS may be read only", owner.getBucket(),
+                    e);
             } else {
-                throw translateException("purging multipart uploads",
-                    owner.getBucket(), e);
+                throw translateException("purging multipart uploads", owner.getBucket(), e);
             }
         }
     }
@@ -1820,20 +1652,17 @@ static void shutdownAll(final ExecutorService... executors) {
      * @throws FileNotFoundException when the path does not exist
      * @throws IOException on other problems
      */
-    static FileStatus innerGetFileStatusWithRetry(final OBSFileSystem owner,
-        final Path f)
+    static FileStatus innerGetFileStatusWithRetry(final OBSFileSystem owner, final Path f)
         throws FileNotFoundException, IOException {
         long delayMs;
         int retryTime = 0;
         long startTime = System.currentTimeMillis();
-        while (System.currentTimeMillis() - startTime
-            <= MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
+        while (System.currentTimeMillis() - startTime <= MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
             try {
                 return owner.innerGetFileStatus(f);
             } catch (OBSIOException e) {
-                OBSFileSystem.LOG.debug(
-                    "Failed to get file status for [{}], retry time [{}], "
-                        + "exception [{}]", f, retryTime, e);
+                OBSFileSystem.LOG.debug("Failed to get file status for [{}], retry time [{}], " + "exception [{}]", f,
+                    retryTime, e);

                 delayMs = getSleepTimeInMs(retryTime);
                 retryTime++;
@@ -1848,31 +1677,24 @@ static FileStatus innerGetFileStatusWithRetry(final OBSFileSystem owner,
         return owner.innerGetFileStatus(f);
     }

-    static long getSleepTimeInMs(final int retryTime) {
-        long sleepTime = OBSCommonUtils.MIN_TIME_IN_MILLISECONDS_TO_SLEEP
-            * (long) ((int) Math.pow(
-            OBSCommonUtils.VARIABLE_BASE_OF_POWER_FUNCTION,
-            retryTime));
+    public static long getSleepTimeInMs(final int retryTime) {
+        long sleepTime = OBSCommonUtils.MIN_TIME_IN_MILLISECONDS_TO_SLEEP * (long) ((int) Math.pow(
+            OBSCommonUtils.VARIABLE_BASE_OF_POWER_FUNCTION, retryTime));
         sleepTime = sleepTime > OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_SLEEP
             ? OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_SLEEP
             : sleepTime;
         return sleepTime;
     }

-    static void setMetricsInfo(OBSFileSystem fs,
-        BasicMetricsConsumer.MetricRecord record) {
+    public static void setMetricsInfo(OBSFileSystem fs, BasicMetricsConsumer.MetricRecord record) {
         long startTime = System.currentTimeMillis();
         fs.getMetricsConsumer().putMetrics(record);
         long endTime = System.currentTimeMillis();
         long costTime = (endTime - startTime) / 1000;
-        if (costTime >= fs.getInvokeCountThreshold()
-            && !(fs.getMetricsConsumer() instanceof DefaultMetricsConsumer)) {
-            LOG.warn("putMetrics cosetTime too much:opType: {},opName: {} "
-                + "costTime: {}",
-                record.getOpType(),
-                record.getOpName(),
-                costTime);
+        if (costTime >= fs.getInvokeCountThreshold() && !(fs.getMetricsConsumer() instanceof DefaultMetricsConsumer)) {
+            LOG.warn("putMetrics costTime too much: opType: {}, opName: {} " + "costTime: {}", record.getOpType(),
+                record.getOpName(), costTime);
         }
     }
 }
diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSConstants.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSConstants.java
index eb27410..144df0d 100644
--- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSConstants.java
+++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSConstants.java
@@ -20,6 +20,7 @@
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.obs.input.OBSInputStream;

 /**
  * All constants used by {@link OBSFileSystem}.
@@ -30,39 +31,36 @@
  */
 @InterfaceAudience.Public
 @InterfaceStability.Evolving
-final class OBSConstants {
+public final class OBSConstants {
     /**
      * Minimum multipart size which OBS supports.
      */
-    static final int MULTIPART_MIN_SIZE = 5 * 1024 * 1024;
+    public static final int MULTIPART_MIN_SIZE = 5 * 1024 * 1024;

     /**
      * OBS access key.
      */
-    static final String ACCESS_KEY = "fs.obs.access.key";
+    public static final String ACCESS_KEY = "fs.obs.access.key";

     /**
      * OBS secret key.
      */
-    static final String SECRET_KEY = "fs.obs.secret.key";
+    public static final String SECRET_KEY = "fs.obs.secret.key";

     /**
      * OBS credentials provider.
      */
-    static final String OBS_CREDENTIALS_PROVIDER
-        = "fs.obs.credentials.provider";
+    static final String OBS_CREDENTIALS_PROVIDER = "fs.obs.credentials.provider";

     /**
      * OBS metrics consumer.
      */
-    static final String OBS_METRICS_CONSUMER
-        = "fs.obs.metrics.consumer";
+    static final String OBS_METRICS_CONSUMER = "fs.obs.metrics.consumer";

     /**
      * Default value of {@link #OBS_METRICS_CONSUMER}.
      */
-    static final Class
-        DEFAULT_OBS_METRICS_CONSUMER = DefaultMetricsConsumer.class;
+    static final Class DEFAULT_OBS_METRICS_CONSUMER = DefaultMetricsConsumer.class;

     /**
      * OBS client security provider.
@@ -74,14 +72,12 @@ final class OBSConstants {
      * {@code "hadoop.security.credential.provider.path"}. This extra option
      * allows for per-bucket overrides.
      */
-    static final String OBS_SECURITY_CREDENTIAL_PROVIDER_PATH =
-        "fs.obs.security.credential.provider.path";
+    static final String OBS_SECURITY_CREDENTIAL_PROVIDER_PATH = "fs.obs.security.credential.provider.path";

     /**
      * Switch for whether need to verify buffer dir accessibility.
      */
-    static final String VERIFY_BUFFER_DIR_ACCESSIBLE_ENABLE =
-        "fs.obs.bufferdir.verify.enable";
+    static final String VERIFY_BUFFER_DIR_ACCESSIBLE_ENABLE = "fs.obs.bufferdir.verify.enable";

     /**
      * Session token for when using TemporaryOBSCredentialsProvider.
@@ -111,7 +107,7 @@ final class OBSConstants {
     /**
      * Use a custom endpoint.
      */
-    static final String ENDPOINT = "fs.obs.endpoint";
+    public static final String ENDPOINT = "fs.obs.endpoint";

     /**
      * Host for connecting to OBS through proxy server.
@@ -166,8 +162,7 @@ final class OBSConstants {
     /**
      * Seconds until we give up trying to establish a connection to obs.
      */
-    static final String ESTABLISH_TIMEOUT
-        = "fs.obs.connection.establish.timeout";
+    static final String ESTABLISH_TIMEOUT = "fs.obs.connection.establish.timeout";

     /**
      * Default value of {@link #ESTABLISH_TIMEOUT}.
@@ -274,7 +269,7 @@ final class OBSConstants {
     /**
      * Size of each of or multipart pieces in bytes.
      */
-    static final String MULTIPART_SIZE = "fs.obs.multipart.size";
+    public static final String MULTIPART_SIZE = "fs.obs.multipart.size";

     /**
      * Default value of {@link #MULTIPART_SIZE}.
@@ -290,8 +285,7 @@ final class OBSConstants {
      * Max number of objects in one multi-object delete call. This option takes
      * effect only when the option 'ENABLE_MULTI_DELETE' is set to 'true'.
      */
-    static final String MULTI_DELETE_MAX_NUMBER
-        = "fs.obs.multiobjectdelete.maximum";
+    static final String MULTI_DELETE_MAX_NUMBER = "fs.obs.multiobjectdelete.maximum";

     /**
      * Default value of {@link #MULTI_DELETE_MAX_NUMBER}.
@@ -301,8 +295,7 @@ final class OBSConstants {
     /**
      * Minimum number of objects in one multi-object delete call.
      */
-    static final String MULTI_DELETE_THRESHOLD
-        = "fs.obs.multiobjectdelete.threshold";
+    static final String MULTI_DELETE_THRESHOLD = "fs.obs.multiobjectdelete.threshold";

     /**
      * Default value of {@link #MULTI_DELETE_THRESHOLD}.
@@ -317,14 +310,14 @@ final class OBSConstants {
     /**
      * Switch to the fast block-by-block upload mechanism.
      */
-    static final String FAST_UPLOAD = "fs.obs.fast.upload";
+    public static final String FAST_UPLOAD = "fs.obs.fast.upload";

     /**
      * What buffer to use. Default is {@link #FAST_UPLOAD_BUFFER_DISK} Value:
      * {@value}
      */
     @InterfaceStability.Unstable
-    static final String FAST_UPLOAD_BUFFER = "fs.obs.fast.upload.buffer";
+    public static final String FAST_UPLOAD_BUFFER = "fs.obs.fast.upload.buffer";

     /**
      * Buffer blocks to disk: {@value}. Capacity is limited to available disk
@@ -337,7 +330,7 @@ final class OBSConstants {
      * Use an in-memory array. Fast but will run of heap rapidly: {@value}.
      */
     @InterfaceStability.Unstable
-    static final String FAST_UPLOAD_BUFFER_ARRAY = "array";
+    public static final String FAST_UPLOAD_BUFFER_ARRAY = "array";

     /**
      * Use a byte buffer. May be more memory efficient than the {@link
@@ -355,8 +348,7 @@ final class OBSConstants {
      *
      * <p>Default is {@link #DEFAULT_FAST_UPLOAD_ACTIVE_BLOCKS}
      */
     @InterfaceStability.Unstable
-    static final String FAST_UPLOAD_ACTIVE_BLOCKS
-        = "fs.obs.fast.upload.active.blocks";
+    static final String FAST_UPLOAD_ACTIVE_BLOCKS = "fs.obs.fast.upload.active.blocks";

     /**
      * Limit of queued block upload operations before writes block. Value:
@@ -390,8 +382,7 @@ final class OBSConstants {
     /**
      * Purge any multipart uploads older than this number of seconds.
      */
-    static final String PURGE_EXISTING_MULTIPART_AGE
-        = "fs.obs.multipart.purge.age";
+    static final String PURGE_EXISTING_MULTIPART_AGE = "fs.obs.multipart.purge.age";

     /**
      * Default value of {@link #PURGE_EXISTING_MULTIPART_AGE}.
@@ -439,15 +430,32 @@ final class OBSConstants {
      */
     static final String USER_AGENT_PREFIX = "fs.obs.user.agent.prefix";

+    /**
+     * What read policy to use. Default is {@link #READAHEAD_POLICY_PRIMARY} Value:
+     * {@value}
+     */
+    @InterfaceStability.Unstable
+    public static final String READAHEAD_POLICY = "fs.obs.readahead.policy";
+
+    @InterfaceStability.Unstable
+    public static final String READAHEAD_POLICY_PRIMARY = "primary";
+
+    @InterfaceStability.Unstable
+    public static final String READAHEAD_POLICY_ADVANCE = "advance";
+
     /**
      * Read ahead buffer size to prevent connection re-establishments.
      */
-    static final String READAHEAD_RANGE = "fs.obs.readahead.range";
+    public static final String READAHEAD_RANGE = "fs.obs.readahead.range";

     /**
      * Default value of {@link #READAHEAD_RANGE}.
      */
-    static final long DEFAULT_READAHEAD_RANGE = 1024 * 1024;
+    public static final long DEFAULT_READAHEAD_RANGE = 1024 * 1024;
+
+    public static final String READAHEAD_MAX_NUM = "fs.obs.readahead.max.number";
+
+    public static final int DEFAULT_READAHEAD_MAX_NUM = 4;

     /**
      * Flag indicating if
@@ -456,24 +464,21 @@ final class OBSConstants {
      * {@link org.apache.hadoop.fs.FSInputStream#read(long,
      * byte[], int, int)}.
      */
-    static final String READ_TRANSFORM_ENABLE = "fs.obs.read.transform.enable";
+    public static final String READAHEAD_TRANSFORM_ENABLE = "fs.obs.read.transform.enable";

     /**
      * OBS client factory implementation class.
      */
     @InterfaceAudience.Private
     @InterfaceStability.Unstable
-    static final String OBS_CLIENT_FACTORY_IMPL
-        = "fs.obs.client.factory.impl";
+    static final String OBS_CLIENT_FACTORY_IMPL = "fs.obs.client.factory.impl";

     /**
      * Default value of {@link #OBS_CLIENT_FACTORY_IMPL}.
      */
     @InterfaceAudience.Private
     @InterfaceStability.Unstable
-    static final Class
-        DEFAULT_OBS_CLIENT_FACTORY_IMPL =
-        DefaultOBSClientFactory.class;
+    static final Class DEFAULT_OBS_CLIENT_FACTORY_IMPL = DefaultOBSClientFactory.class;

     /**
      * Maximum number of partitions in a multipart upload: {@value}.
@@ -526,8 +531,7 @@ final class OBSConstants {
     /**
      * Verify response content type.
      */
-    static final String VERIFY_RESPONSE_CONTENT_TYPE
-        = "fs.obs.verify.response.content.type";
+    static final String VERIFY_RESPONSE_CONTENT_TYPE = "fs.obs.verify.response.content.type";

     /**
      * Default value of {@link #VERIFY_RESPONSE_CONTENT_TYPE}.
@@ -537,8 +541,7 @@ final class OBSConstants {
     /**
      * UploadStreamRetryBufferSize.
      */
-    static final String UPLOAD_STREAM_RETRY_SIZE
-        = "fs.obs.upload.stream.retry.buffer.size";
+    static final String UPLOAD_STREAM_RETRY_SIZE = "fs.obs.upload.stream.retry.buffer.size";

     /**
      * Default value of {@link #UPLOAD_STREAM_RETRY_SIZE}.
@@ -578,8 +581,7 @@ final class OBSConstants {
     /**
      * Strict host name verification.
      */
-    static final String STRICT_HOSTNAME_VERIFICATION
-        = "fs.obs.strict.hostname.verification";
+    static final String STRICT_HOSTNAME_VERIFICATION = "fs.obs.strict.hostname.verification";

     /**
      * Default value of {@link #STRICT_HOSTNAME_VERIFICATION}.
@@ -634,8 +636,7 @@ final class OBSConstants {
     /**
      * Capacity of list work queue.
*/ - static final String LIST_WORK_QUEUE_CAPACITY - = "fs.obs.list.workqueue.capacity"; + static final String LIST_WORK_QUEUE_CAPACITY = "fs.obs.list.workqueue.capacity"; /** * Default value of {@link #LIST_WORK_QUEUE_CAPACITY}. @@ -660,14 +661,12 @@ final class OBSConstants { /** * Enable obs content summary or not. */ - static final String OBS_CONTENT_SUMMARY_ENABLE - = "fs.obs.content.summary.enable"; + static final String OBS_CONTENT_SUMMARY_ENABLE = "fs.obs.content.summary.enable"; /** * Enable obs client dfs list or not. */ - static final String OBS_CLIENT_DFS_LIST_ENABLE - = "fs.obs.client.dfs.list.enable"; + static final String OBS_CLIENT_DFS_LIST_ENABLE = "fs.obs.client.dfs.list.enable"; /** * Default trash : false. @@ -692,20 +691,17 @@ final class OBSConstants { /** * Array first block size. */ - static final String FAST_UPLOAD_BUFFER_ARRAY_FIRST_BLOCK_SIZE - = "fs.obs.fast.upload.array.first.buffer"; + static final String FAST_UPLOAD_BUFFER_ARRAY_FIRST_BLOCK_SIZE = "fs.obs.fast.upload.array.first.buffer"; /** * The fast upload buffer array first block default size. */ - static final int FAST_UPLOAD_BUFFER_ARRAY_FIRST_BLOCK_SIZE_DEFAULT = 1024 - * 1024; + public static final int FAST_UPLOAD_BUFFER_ARRAY_FIRST_BLOCK_SIZE_DEFAULT = 1024 * 1024; /** * Auth Type Negotiation Enable Switch. */ - static final String SDK_AUTH_TYPE_NEGOTIATION_ENABLE - = "fs.obs.authtype.negotiation.enable"; + static final String SDK_AUTH_TYPE_NEGOTIATION_ENABLE = "fs.obs.authtype.negotiation.enable"; /** * Default value of {@link #SDK_AUTH_TYPE_NEGOTIATION_ENABLE}. @@ -715,8 +711,7 @@ final class OBSConstants { /** * Okhttp retryOnConnectionFailure switch. */ - static final String SDK_RETRY_ON_CONNECTION_FAILURE_ENABLE - = "fs.obs.connection.retry.enable"; + static final String SDK_RETRY_ON_CONNECTION_FAILURE_ENABLE = "fs.obs.connection.retry.enable"; /** * Default value of {@link #SDK_RETRY_ON_CONNECTION_FAILURE_ENABLE}. 
@@ -727,8 +722,7 @@ final class OBSConstants { * Sdk max retry times on unexpected end of stream. exception, default: -1, * don't retry */ - static final String SDK_RETRY_TIMES_ON_UNEXPECTED_END_EXCEPTION - = "fs.obs.unexpectedend.retrytime"; + static final String SDK_RETRY_TIMES_ON_UNEXPECTED_END_EXCEPTION = "fs.obs.unexpectedend.retrytime"; /** * Default value of {@link #SDK_RETRY_TIMES_ON_UNEXPECTED_END_EXCEPTION}. @@ -743,14 +737,31 @@ final class OBSConstants { /** * Whether to implement getCanonicalServiceName switch. */ - static final String GET_CANONICAL_SERVICE_NAME_ENABLE = - "fs.obs.getcanonicalservicename.enable"; + static final String GET_CANONICAL_SERVICE_NAME_ENABLE = "fs.obs.getcanonicalservicename.enable"; /** * Default value of {@link #GET_CANONICAL_SERVICE_NAME_ENABLE}. */ static final boolean DEFAULT_GET_CANONICAL_SERVICE_NAME_ENABLE = false; + static final String MAX_TIME_IN_MILLISECOND_TO_RETRY = "fs.obs.retry.maxtime"; + + /** + * Default value of {@link #MAX_TIME_IN_MILLISECOND_TO_RETRY} + */ + static final long DEFAULT_TIME_IN_MILLISECOND_TO_RETRY = 180000; + + /** + * File visibility after create interface switch. + */ + static final String FILE_VISIBILITY_AFTER_CREATE_ENABLE = + "fs.obs.file.visibility.enable"; + + /** + * Default value of {@link #FILE_VISIBILITY_AFTER_CREATE_ENABLE}. + */ + static final boolean DEFAULT_FILE_VISIBILITY_AFTER_CREATE_ENABLE = false; + /** * Second to millisecond factor. */ diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSDataBlocks.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSDataBlocks.java index b45c139..bb2b061 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSDataBlocks.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSDataBlocks.java @@ -50,8 +50,7 @@ final class OBSDataBlocks { /** * Class logger. 
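For reference, the new tunables added to OBSConstants above could be set in a Hadoop site configuration roughly as follows. This is only a sketch: the option names and defaults are taken from the constants in this diff, and the values shown are just those defaults spelled out explicitly, not a tuning recommendation.

```xml
<!-- Sketch: new fs.obs options introduced in this change, shown with the
     default values defined in OBSConstants. -->
<configuration>
  <property>
    <name>fs.obs.readahead.policy</name>
    <value>primary</value> <!-- or "advance" -->
  </property>
  <property>
    <name>fs.obs.readahead.max.number</name>
    <value>4</value>
  </property>
  <property>
    <name>fs.obs.retry.maxtime</name>
    <value>180000</value> <!-- milliseconds, 180s retry ceiling -->
  </property>
  <property>
    <name>fs.obs.file.visibility.enable</name>
    <value>false</value> <!-- create() immediate-visibility switch, off by default -->
  </property>
</configuration>
```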
     */
-    private static final Logger LOG = LoggerFactory.getLogger(
-        OBSDataBlocks.class);
+    private static final Logger LOG = LoggerFactory.getLogger(OBSDataBlocks.class);

     private OBSDataBlocks() {
     }
@@ -66,13 +65,10 @@ private OBSDataBlocks() {
      * @throws NullPointerException for a null buffer
      * @throws IndexOutOfBoundsException if indices are out of range
      */
-    static void validateWriteArgs(final byte[] b, final int off,
-        final int len) {
+    static void validateWriteArgs(final byte[] b, final int off, final int len) {
         Preconditions.checkNotNull(b);
-        if (off < 0 || off > b.length || len < 0 || off + len > b.length
-            || off + len < 0) {
-            throw new IndexOutOfBoundsException(
-                "write (b[" + b.length + "], " + off + ", " + len + ')');
+        if (off < 0 || off > b.length || len < 0 || off + len > b.length || off + len < 0) {
+            throw new IndexOutOfBoundsException("write (b[" + b.length + "], " + off + ", " + len + ')');
         }
     }
@@ -84,8 +80,7 @@ static void validateWriteArgs(final byte[] b, final int off,
      * @return the factory, ready to be initialized.
      * @throws IllegalArgumentException if the name is unknown.
      */
-    static BlockFactory createFactory(final OBSFileSystem owner,
-        final String name) {
+    static BlockFactory createFactory(final OBSFileSystem owner, final String name) {
         switch (name) {
         case OBSConstants.FAST_UPLOAD_BUFFER_ARRAY:
             return new ByteArrayBlockFactory(owner);
@@ -94,8 +89,7 @@ static BlockFactory createFactory(final OBSFileSystem owner,
         case OBSConstants.FAST_UPLOAD_BYTEBUFFER:
             return new ByteBufferBlockFactory(owner);
         default:
-            throw new IllegalArgumentException(
-                "Unsupported block buffer" + " \"" + name + '"');
+            throw new IllegalArgumentException("Unsupported block buffer" + " \"" + name + '"');
         }
     }
@@ -159,8 +153,7 @@ protected DataBlock(final long dataIndex) {
      * @throws IllegalStateException if the current state is not as
      *                               expected
      */
-    protected final synchronized void enterState(final DestState current,
-        final DestState next)
+    protected final synchronized void enterState(final DestState current, final DestState next)
         throws IllegalStateException {
         verifyState(current);
         LOG.debug("{}: entering state {}", this, next);
@@ -173,12 +166,10 @@ protected final synchronized void enterState(final DestState current,
      * @param expected expected state.
      * @throws IllegalStateException if the DataBlock is in the wrong state
      */
-    protected final void verifyState(final DestState expected)
-        throws IllegalStateException {
+    protected final void verifyState(final DestState expected) throws IllegalStateException {
         if (expected != null && state != expected) {
             throw new IllegalStateException(
-                "Expected stream state " + expected
-                    + " -but actual state is " + state + " in " + this);
+                "Expected stream state " + expected + " -but actual state is " + state + " in " + this);
         }
     }
@@ -238,14 +229,12 @@ boolean hasData() {
      * @return number of bytes written
      * @throws IOException trouble
      */
-    int write(final byte[] buffer, final int offset, final int length)
-        throws IOException {
+    int write(final byte[] buffer, final int offset, final int length) throws IOException {
         verifyState(DestState.Writing);
         Preconditions.checkArgument(buffer != null, "Null buffer");
         Preconditions.checkArgument(length >= 0, "length is negative");
         Preconditions.checkArgument(offset >= 0, "offset is negative");
-        Preconditions.checkArgument(
-            !(buffer.length - offset < length),
+        Preconditions.checkArgument(!(buffer.length - offset < length),
             "buffer shorter than amount of data to write");
         return 0;
     }
@@ -335,8 +324,7 @@ static class ByteArrayBlockFactory extends BlockFactory {
         DataBlock create(final long index, final int limit) {
             int firstBlockSize = super.owner.getConf()
                 .getInt(OBSConstants.FAST_UPLOAD_BUFFER_ARRAY_FIRST_BLOCK_SIZE,
-                    OBSConstants
-                        .FAST_UPLOAD_BUFFER_ARRAY_FIRST_BLOCK_SIZE_DEFAULT);
+                    OBSConstants.FAST_UPLOAD_BUFFER_ARRAY_FIRST_BLOCK_SIZE_DEFAULT);
             return new ByteArrayBlock(0, limit, firstBlockSize);
         }
     }
@@ -355,8 +343,7 @@ static class OBSByteArrayOutputStream extends ByteArrayOutputStream {
          * @return input stream
          */
         ByteArrayInputStream getInputStream() {
-            ByteArrayInputStream bin = new ByteArrayInputStream(this.buf, 0,
-                count);
+            ByteArrayInputStream bin = new ByteArrayInputStream(this.buf, 0, count);
             this.reset();
             this.buf = null;
             return bin;
@@ -399,8 +386,7 @@ static class ByteArrayBlock extends DataBlock {
          */
         private ByteArrayInputStream inputStream = null;

-        ByteArrayBlock(final long index, final int limitBlockSize,
-            final int blockSize) {
+        ByteArrayBlock(final long index, final int limitBlockSize, final int blockSize) {
             super(index);
             this.limit = limitBlockSize;
             this.buffer = new OBSByteArrayOutputStream(blockSize);
@@ -446,8 +432,7 @@ int remainingCapacity() {
         }

         @Override
-        int write(final byte[] b, final int offset, final int len)
-            throws IOException {
+        int write(final byte[] b, final int offset, final int len) throws IOException {
             super.write(b, offset, len);
             int written = Math.min(remainingCapacity(), len);
             buffer.write(b, offset, written);
@@ -469,16 +454,8 @@ protected void innerClose() throws IOException {

         @Override
         public String toString() {
-            return "ByteArrayBlock{"
-                + "index="
-                + getIndex()
-                + ", state="
-                + getState()
-                + ", limit="
-                + limit
-                + ", dataSize="
-                + dataSize
-                + '}';
+            return "ByteArrayBlock{" + "index=" + getIndex() + ", state=" + getState() + ", limit=" + limit
+                + ", dataSize=" + dataSize + '}';
         }
     }
@@ -491,14 +468,12 @@ static class ByteBufferBlockFactory extends BlockFactory {
         /**
          * The directory buffer pool.
          */
-        private static final DirectBufferPool BUFFER_POOL
-            = new DirectBufferPool();
+        private static final DirectBufferPool BUFFER_POOL = new DirectBufferPool();

         /**
          * Count of outstanding buffers.
         */
-        private static final AtomicInteger BUFFERS_OUTSTANDING
-            = new AtomicInteger(0);
+        private static final AtomicInteger BUFFERS_OUTSTANDING = new AtomicInteger(0);

         ByteBufferBlockFactory(final OBSFileSystem owner) {
             super(owner);
@@ -532,8 +507,7 @@ public int getOutstandingBufferCount() {

         @Override
         public String toString() {
-            return "ByteBufferBlockFactory{" + "buffersOutstanding="
-                + BUFFERS_OUTSTANDING + '}';
+            return "ByteBufferBlockFactory{" + "buffersOutstanding=" + BUFFERS_OUTSTANDING + '}';
         }
     }
@@ -610,8 +584,7 @@ private int bufferCapacityUsed() {
         }

         @Override
-        int write(final byte[] b, final int offset, final int len)
-            throws IOException {
+        int write(final byte[] b, final int offset, final int len) throws IOException {
             super.write(b, offset, len);
             int written = Math.min(remainingCapacity(), len);
             blockBuffer.put(b, offset, written);
@@ -635,18 +608,8 @@ protected void innerClose() {

         @Override
         public String toString() {
-            return "ByteBufferBlock{"
-                + "index="
-                + getIndex()
-                + ", state="
-                + getState()
-                + ", dataSize="
-                + dataSize()
-                + ", limit="
-                + bufferSize
-                + ", remainingCapacity="
-                + remainingCapacity()
-                + '}';
+            return "ByteBufferBlock{" + "index=" + getIndex() + ", state=" + getState() + ", dataSize=" + dataSize()
+                + ", limit=" + bufferSize + ", remainingCapacity=" + remainingCapacity() + '}';
         }

         /**
@@ -666,10 +629,8 @@ class ByteBufferInputStream extends InputStream {
              */
             private ByteBuffer byteBuffer;

-            ByteBufferInputStream(final int streamSize,
-                final ByteBuffer streamByteBuffer) {
-                LOG.debug("Creating ByteBufferInputStream of size {}",
-                    streamSize);
+            ByteBufferInputStream(final int streamSize, final ByteBuffer streamByteBuffer) {
+                LOG.debug("Creating ByteBufferInputStream of size {}", streamSize);
                 this.size = streamSize;
                 this.byteBuffer = streamByteBuffer;
             }
@@ -681,8 +642,7 @@ class ByteBufferInputStream extends InputStream {
              */
             @Override
             public synchronized void close() {
-                LOG.debug("ByteBufferInputStream.close() for {}",
-                    ByteBufferBlock.super.toString());
+                LOG.debug("ByteBufferInputStream.close() for {}", ByteBufferBlock.super.toString());
                 byteBuffer = null;
             }
@@ -706,16 +666,14 @@ public synchronized int read() {
             }

             @Override
-            public synchronized long skip(final long offset)
-                throws IOException {
+            public synchronized long skip(final long offset) throws IOException {
                 verifyOpen();
                 long newPos = position() + offset;
                 if (newPos < 0) {
                     throw new EOFException(FSExceptionMessages.NEGATIVE_SEEK);
                 }
                 if (newPos > size) {
-                    throw new EOFException(
-                        FSExceptionMessages.CANNOT_SEEK_PAST_EOF);
+                    throw new EOFException(FSExceptionMessages.CANNOT_SEEK_PAST_EOF);
                 }
                 byteBuffer.position((int) newPos);
                 return newPos;
@@ -723,8 +681,7 @@ public synchronized long skip(final long offset)

             @Override
             public synchronized int available() {
-                Preconditions.checkState(byteBuffer != null,
-                    FSExceptionMessages.STREAM_IS_CLOSED);
+                Preconditions.checkState(byteBuffer != null, FSExceptionMessages.STREAM_IS_CLOSED);
                 return byteBuffer.remaining();
             }
@@ -775,20 +732,13 @@ public boolean markSupported() {
              *                                   amount of data requested.
              * @throws IllegalArgumentException  other arguments are invalid.
              */
-            public synchronized int read(final byte[] b, final int offset,
-                final int length)
-                throws IOException {
+            public synchronized int read(final byte[] b, final int offset, final int length) throws IOException {
                 Preconditions.checkArgument(length >= 0, "length is negative");
                 Preconditions.checkArgument(b != null, "Null buffer");
                 if (b.length - offset < length) {
                     throw new IndexOutOfBoundsException(
-                        FSExceptionMessages.TOO_MANY_BYTES_FOR_DEST_BUFFER
-                            + ": request length ="
-                            + length
-                            + ", with offset ="
-                            + offset
-                            + "; buffer capacity ="
-                            + (b.length - offset));
+                        FSExceptionMessages.TOO_MANY_BYTES_FOR_DEST_BUFFER + ": request length =" + length
+                            + ", with offset =" + offset + "; buffer capacity =" + (b.length - offset));
                 }
                 verifyOpen();
                 if (!hasRemaining()) {
@@ -802,8 +752,7 @@ public synchronized int read(final byte[] b, final int offset,

             @Override
             public String toString() {
-                final StringBuilder sb = new StringBuilder(
-                    "ByteBufferInputStream{");
+                final StringBuilder sb = new StringBuilder("ByteBufferInputStream{");
                 sb.append("size=").append(size);
                 ByteBuffer buf = this.byteBuffer;
                 if (buf != null) {
@@ -839,9 +788,7 @@ static class DiskBlockFactory extends BlockFactory {
          */
         @Override
         DataBlock create(final long index, final int limit) throws IOException {
-            File destFile = createTmpFileForWrite(
-                String.format("obs-block-%04d-", index), limit,
-                getOwner().getConf());
+            File destFile = createTmpFileForWrite(String.format("obs-block-%04d-", index), limit, getOwner().getConf());
             return new DiskBlock(destFile, limit, index);
         }
@@ -856,8 +803,7 @@ DataBlock create(final long index, final int limit) throws IOException {
          * @return a unique temporary file
          * @throws IOException IO problems
          */
-        static synchronized File createTmpFileForWrite(final String pathStr,
-            final long size, final Configuration conf)
+        static File createTmpFileForWrite(final String pathStr, final long size, final Configuration conf)
             throws IOException {
             if (directoryAllocator == null) {
                 String bufferDir = conf.get(OBSConstants.BUFFER_DIR) != null
@@ -865,8 +811,7 @@ static synchronized File createTmpFileForWrite(final String pathStr,
                     : "hadoop.tmp.dir";
                 directoryAllocator = new OBSLocalDirAllocator(bufferDir);
             }
-            return directoryAllocator.createTmpFileForWrite(pathStr, size,
-                conf);
+            return directoryAllocator.createTmpFileForWrite(pathStr, size, conf);
         }
     }
@@ -901,14 +846,11 @@ static class DiskBlock extends DataBlock {
          */
         private BufferedOutputStream out;

-        DiskBlock(final File destBufferFile, final int limitSize,
-            final long index)
-            throws FileNotFoundException {
+        DiskBlock(final File destBufferFile, final int limitSize, final long index) throws FileNotFoundException {
             super(index);
             this.limit = limitSize;
             this.bufferFile = destBufferFile;
-            out = new BufferedOutputStream(
-                new FileOutputStream(destBufferFile));
+            out = new BufferedOutputStream(new FileOutputStream(destBufferFile));
         }

         @Override
@@ -927,8 +869,7 @@ int remainingCapacity() {
         }

         @Override
-        int write(final byte[] b, final int offset, final int len)
-            throws IOException {
+        int write(final byte[] b, final int offset, final int len) throws IOException {
             super.write(b, offset, len);
             int written = Math.min(remainingCapacity(), len);
             out.write(b, offset, written);
@@ -960,18 +901,13 @@ protected void innerClose() {
             case Writing:
                 if (bufferFile.exists()) {
                     // file was not uploaded
-                    LOG.debug(
-                        "Block[{}]: Deleting buffer file as upload "
-                            + "did not start",
-                        getIndex());
+                    LOG.debug("Block[{}]: Deleting buffer file as upload " + "did not start", getIndex());
                     closeBlock();
                 }
                 break;

             case Upload:
-                LOG.debug(
-                    "Block[{}]: Buffer file {} exists —close upload stream",
-                    getIndex(), bufferFile);
+                LOG.debug("Block[{}]: Buffer file {} exists —close upload stream", getIndex(), bufferFile);
                 break;

             case Closed:
@@ -997,9 +933,8 @@ void flush() throws IOException {

         @Override
         public String toString() {
-            return "FileBlock{index=" + getIndex() + ", destFile=" + bufferFile
-                + ", state=" + getState() + ", dataSize="
-                + dataSize() + ", limit=" + limit + '}';
+            return "FileBlock{index=" + getIndex() + ", destFile=" + bufferFile + ", state=" + getState()
+                + ", dataSize=" + dataSize() + ", limit=" + limit + '}';
         }

         /**
@@ -1010,12 +945,10 @@ void closeBlock() {
             LOG.debug("block[{}]: closeBlock()", getIndex());
             if (!closed.getAndSet(true)) {
                 if (!bufferFile.delete() && bufferFile.exists()) {
-                    LOG.warn("delete({}) returned false",
-                        bufferFile.getAbsoluteFile());
+                    LOG.warn("delete({}) returned false", bufferFile.getAbsoluteFile());
                 }
             } else {
-                LOG.debug("block[{}]: skipping re-entrant closeBlock()",
-                    getIndex());
+                LOG.debug("block[{}]: skipping re-entrant closeBlock()", getIndex());
             }
         }
     }
diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileStatus.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileStatus.java
index 070f2b5..30dca8c 100644
--- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileStatus.java
+++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileStatus.java
@@ -50,8 +50,7 @@ class OBSFileStatus extends FileStatus {
     * @param path the path
     * @param owner the owner
     */
-    OBSFileStatus(final Path path, final long modificationTime,
-        final String owner) {
+    OBSFileStatus(final Path path, final long modificationTime, final String owner) {
        super(0, true, 1, 0, modificationTime, path);
        setOwner(owner);
        setGroup(owner);
@@ -65,11 +64,8 @@ class OBSFileStatus extends FileStatus {
     * @param path the path
     * @param owner the owner
     */
-    OBSFileStatus(final Path path, final long modificationTime,
-        final long accessTime,
-        final String owner) {
-        super(0, true, 1, 0, modificationTime, accessTime, null, owner, owner,
-            path);
+    OBSFileStatus(final Path path, final long modificationTime, final long accessTime, final String owner) {
+        super(0, true, 1, 0, modificationTime, accessTime, null, owner, owner, path);
    }

    /**
@@ -81,9 +77,7 @@ class OBSFileStatus extends FileStatus {
     * @param blockSize block size
     * @param owner owner
     */
-    OBSFileStatus(
-        final long length, final long modificationTime, final Path path,
-        final long blockSize,
+    OBSFileStatus(final long length, final long modificationTime, final Path path, final long blockSize,
        final String owner) {
        super(length, false, 1, blockSize, modificationTime, path);
        setOwner(owner);
diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileSystem.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileSystem.java
index 504520a..5464922 100644
--- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileSystem.java
+++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFileSystem.java
@@ -33,6 +33,7 @@
 import org.apache.hadoop.fs.CreateFlag;
 import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSInputStream;
 import org.apache.hadoop.fs.FileAlreadyExistsException;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
@@ -42,6 +43,9 @@
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.PathFilter;
 import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.obs.input.InputPolicyFactory;
+import org.apache.hadoop.fs.obs.input.InputPolicys;
+import org.apache.hadoop.fs.obs.input.OBSInputStream;
 import org.apache.hadoop.fs.permission.FsPermission;
 import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -80,11 +84,11 @@
 @InterfaceStability.Evolving
 public class OBSFileSystem extends FileSystem {
     //CHECKSTYLE:ON
+
     /**
      * Class logger.
      */
-    public static final Logger LOG = LoggerFactory.getLogger(
-        OBSFileSystem.class);
+    public static final Logger LOG = LoggerFactory.getLogger(OBSFileSystem.class);

     /**
      * Flag indicating if the filesystem instance is closed.
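The `validateWriteArgs` bounds check in OBSDataBlocks above guards against both out-of-range indices and `int` overflow in `off + len`. A standalone sketch of the same predicate follows; the class and method names here are hypothetical, not part of the patch:

```java
public class BoundsCheckDemo {
    // Mirrors the predicate in OBSDataBlocks.validateWriteArgs: the final
    // "off + len < 0" clause catches integer overflow of the sum.
    static boolean isValidRange(byte[] b, int off, int len) {
        if (b == null) {
            throw new NullPointerException("buffer");
        }
        return !(off < 0 || off > b.length || len < 0
            || off + len > b.length || off + len < 0);
    }

    public static void main(String[] args) {
        byte[] buf = new byte[8];
        System.out.println(isValidRange(buf, 0, 8));                 // whole buffer: valid
        System.out.println(isValidRange(buf, 4, 8));                 // runs past the end: invalid
        System.out.println(isValidRange(buf, 4, Integer.MAX_VALUE)); // sum overflows: invalid
    }
}
```

Without the overflow clause, `off + len` wrapping negative would slip past the `off + len > b.length` check.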
@@ -223,6 +227,8 @@ public class OBSFileSystem extends FileSystem {
      */
     private OBSDataBlocks.BlockFactory blockFactory;

+    private InputPolicyFactory inputPolicyFactory;
+
     /**
      * Maximum Number of active blocks a single output stream can submit to
      * {@link #boundedMultipartUploadThreadPool}.
@@ -271,6 +277,8 @@ public class OBSFileSystem extends FileSystem {
      */
     private Map filesBeingWritten = new HashMap<>();

+    private boolean enableFileVisibilityAfterCreate = false;
+
     /**
      * Close all {@link FSDataOutputStream} opened by the owner {@link
      * OBSFileSystem}.
@@ -329,95 +337,64 @@ boolean isFileBeingWritten(String file) {
      * ones before any use is made of the config.
      */
     @Override
-    public void initialize(final URI name, final Configuration originalConf)
-        throws IOException {
+    public void initialize(final URI name, final Configuration originalConf) throws IOException {
         uri = URI.create(name.getScheme() + "://" + name.getAuthority());
         bucket = name.getAuthority();
         // clone the configuration into one with propagated bucket options
-        Configuration conf = OBSCommonUtils.propagateBucketOptions(originalConf,
-            bucket);
+        Configuration conf = OBSCommonUtils.propagateBucketOptions(originalConf, bucket);
         OBSCommonUtils.patchSecurityCredentialProviders(conf);
         super.initialize(name, conf);
         setConf(conf);
         try {
-            if (conf.getBoolean(
-                OBSConstants.VERIFY_BUFFER_DIR_ACCESSIBLE_ENABLE, false)) {
+            if (conf.getBoolean(OBSConstants.VERIFY_BUFFER_DIR_ACCESSIBLE_ENABLE, false)) {
                 OBSCommonUtils.verifyBufferDirAccessible(conf);
             }
-            metricSwitch = conf.getBoolean(OBSConstants.METRICS_SWITCH,
-                OBSConstants.DEFAULT_METRICS_SWITCH);
-            invokeCountThreshold = conf.getInt(OBSConstants.METRICS_COUNT,
-                OBSConstants.DEFAULT_METRICS_COUNT);
+            metricSwitch = conf.getBoolean(OBSConstants.METRICS_SWITCH, OBSConstants.DEFAULT_METRICS_SWITCH);
+            invokeCountThreshold = conf.getInt(OBSConstants.METRICS_COUNT, OBSConstants.DEFAULT_METRICS_COUNT);

             // Username is the current user at the time the FS was instantiated.
-            shortUserName = UserGroupInformation.getCurrentUser()
-                .getShortUserName();
-            workingDir = new Path("/user", shortUserName).makeQualified(
-                this.uri,
-                this.getWorkingDirectory());
-
-            Class obsClientFactoryClass =
-                conf.getClass(
-                    OBSConstants.OBS_CLIENT_FACTORY_IMPL,
-                    OBSConstants.DEFAULT_OBS_CLIENT_FACTORY_IMPL,
-                    OBSClientFactory.class);
-            obs = ReflectionUtils.newInstance(obsClientFactoryClass, conf)
-                .createObsClient(name);
+            shortUserName = UserGroupInformation.getCurrentUser().getShortUserName();
+            workingDir = new Path("/user", shortUserName).makeQualified(this.uri, this.getWorkingDirectory());
+
+            Class obsClientFactoryClass = conf.getClass(
+                OBSConstants.OBS_CLIENT_FACTORY_IMPL, OBSConstants.DEFAULT_OBS_CLIENT_FACTORY_IMPL,
+                OBSClientFactory.class);
+            obs = ReflectionUtils.newInstance(obsClientFactoryClass, conf).createObsClient(name);
             sse = new SseWrapper(conf);

-            Class metricsConsumerClass =
-                conf.getClass(OBSConstants.OBS_METRICS_CONSUMER,
-                    OBSConstants.DEFAULT_OBS_METRICS_CONSUMER,
-                    BasicMetricsConsumer.class);
-            if (!metricsConsumerClass.equals(DefaultMetricsConsumer.class)
-                || metricSwitch) {
+            Class metricsConsumerClass = conf.getClass(
+                OBSConstants.OBS_METRICS_CONSUMER, OBSConstants.DEFAULT_OBS_METRICS_CONSUMER,
+                BasicMetricsConsumer.class);
+            if (!metricsConsumerClass.equals(DefaultMetricsConsumer.class) || metricSwitch) {
                 try {
-                    Constructor cons =
-                        metricsConsumerClass.getDeclaredConstructor(URI.class,
-                            Configuration.class);
-                    metricsConsumer = (BasicMetricsConsumer) cons.newInstance(
-                        name,
-                        conf);
-                } catch (NoSuchMethodException
-                    | SecurityException
-                    | IllegalAccessException
-                    | InstantiationException
-                    | InvocationTargetException e) {
+                    Constructor cons = metricsConsumerClass.getDeclaredConstructor(URI.class, Configuration.class);
+                    metricsConsumer = (BasicMetricsConsumer) cons.newInstance(name, conf);
+                } catch (NoSuchMethodException | SecurityException | IllegalAccessException | InstantiationException
+                    | InvocationTargetException e) {
                     Throwable c = e.getCause() != null ? e.getCause() : e;
-                    throw new IOException(
-                        "From option " + OBSConstants.OBS_METRICS_CONSUMER, c);
+                    throw new IOException("From option " + OBSConstants.OBS_METRICS_CONSUMER, c);
                 }
             }

             OBSCommonUtils.verifyBucketExists(this);
             enablePosix = OBSCommonUtils.getBucketFsStatus(obs, bucket);

-            maxKeys = OBSCommonUtils.intOption(conf,
-                OBSConstants.MAX_PAGING_KEYS,
-                OBSConstants.DEFAULT_MAX_PAGING_KEYS, 1);
+            maxKeys = OBSCommonUtils.intOption(conf, OBSConstants.MAX_PAGING_KEYS, OBSConstants.DEFAULT_MAX_PAGING_KEYS,
+                1);
             obsListing = new OBSListing(this);
-            partSize = OBSCommonUtils.getMultipartSizeProperty(conf,
-                OBSConstants.MULTIPART_SIZE,
+            partSize = OBSCommonUtils.getMultipartSizeProperty(conf, OBSConstants.MULTIPART_SIZE,
                 OBSConstants.DEFAULT_MULTIPART_SIZE);

             // check but do not store the block size
-            blockSize = OBSCommonUtils.longBytesOption(conf,
-                OBSConstants.FS_OBS_BLOCK_SIZE,
+            blockSize = OBSCommonUtils.longBytesOption(conf, OBSConstants.FS_OBS_BLOCK_SIZE,
                 OBSConstants.DEFAULT_FS_OBS_BLOCK_SIZE, 1);
-            enableMultiObjectDelete = conf.getBoolean(
-                OBSConstants.ENABLE_MULTI_DELETE, true);
-            maxEntriesToDelete = conf.getInt(
-                OBSConstants.MULTI_DELETE_MAX_NUMBER,
+            enableMultiObjectDelete = conf.getBoolean(OBSConstants.ENABLE_MULTI_DELETE, true);
+            maxEntriesToDelete = conf.getInt(OBSConstants.MULTI_DELETE_MAX_NUMBER,
                 OBSConstants.DEFAULT_MULTI_DELETE_MAX_NUMBER);
-            obsContentSummaryEnable = conf.getBoolean(
-                OBSConstants.OBS_CONTENT_SUMMARY_ENABLE, true);
-            readAheadRange = OBSCommonUtils.longBytesOption(conf,
-                OBSConstants.READAHEAD_RANGE,
+            obsContentSummaryEnable = conf.getBoolean(OBSConstants.OBS_CONTENT_SUMMARY_ENABLE, true);
+            readAheadRange = OBSCommonUtils.longBytesOption(conf, OBSConstants.READAHEAD_RANGE,
                 OBSConstants.DEFAULT_READAHEAD_RANGE, 0);
-            readTransformEnable = conf.getBoolean(
-                OBSConstants.READ_TRANSFORM_ENABLE, true);
-            multiDeleteThreshold = conf.getInt(
-                OBSConstants.MULTI_DELETE_THRESHOLD,
+            readTransformEnable = conf.getBoolean(OBSConstants.READAHEAD_TRANSFORM_ENABLE, true);
+            multiDeleteThreshold = conf.getInt(OBSConstants.MULTI_DELETE_THRESHOLD,
                 OBSConstants.MULTI_DELETE_DEFAULT_THRESHOLD);

             initThreadPools(conf);
@@ -428,43 +405,32 @@ public void initialize(final URI name, final Configuration originalConf)

             OBSCommonUtils.initMultipartUploads(this, conf);

-            String blockOutputBuffer = conf.getTrimmed(
-                OBSConstants.FAST_UPLOAD_BUFFER,
+            String blockOutputBuffer = conf.getTrimmed(OBSConstants.FAST_UPLOAD_BUFFER,
                 OBSConstants.FAST_UPLOAD_BUFFER_DISK);
-            partSize = OBSCommonUtils.ensureOutputParameterInRange(
-                OBSConstants.MULTIPART_SIZE, partSize);
+            partSize = OBSCommonUtils.ensureOutputParameterInRange(OBSConstants.MULTIPART_SIZE, partSize);
             blockFactory = OBSDataBlocks.createFactory(this, blockOutputBuffer);
-            blockOutputActiveBlocks =
-                OBSCommonUtils.intOption(conf,
-                    OBSConstants.FAST_UPLOAD_ACTIVE_BLOCKS,
-                    OBSConstants.DEFAULT_FAST_UPLOAD_ACTIVE_BLOCKS, 1);
-            LOG.debug(
-                "Using OBSBlockOutputStream with buffer = {}; block={};"
-                    + " queue limit={}",
-                blockOutputBuffer,
-                partSize,
-                blockOutputActiveBlocks);
-
-            enableTrash = conf.getBoolean(OBSConstants.TRASH_ENABLE,
-                OBSConstants.DEFAULT_TRASH);
+            blockOutputActiveBlocks = OBSCommonUtils.intOption(conf, OBSConstants.FAST_UPLOAD_ACTIVE_BLOCKS,
+                OBSConstants.DEFAULT_FAST_UPLOAD_ACTIVE_BLOCKS, 1);
+            LOG.debug("Using OBSBlockOutputStream with buffer = {}; block={};" + " queue limit={}", blockOutputBuffer,
+                partSize, blockOutputActiveBlocks);
+
+            String readPolicy = conf.getTrimmed(OBSConstants.READAHEAD_POLICY, OBSConstants.READAHEAD_POLICY_PRIMARY);
+            inputPolicyFactory = InputPolicys.createFactory(readPolicy);
+
+            enableTrash = conf.getBoolean(OBSConstants.TRASH_ENABLE, OBSConstants.DEFAULT_TRASH);
             if (enableTrash) {
                 if (!isFsBucket()) {
-                    String errorMsg = String.format(
-                        "The bucket [%s] is not posix. not supported for "
-                            + "trash.", bucket);
+                    String errorMsg = String.format("The bucket [%s] is not posix. not supported for " + "trash.",
                        bucket);
                     LOG.warn(errorMsg);
                     enableTrash = false;
                     trashDir = null;
                 } else {
                     trashDir = conf.get(OBSConstants.TRASH_DIR);
                     if (StringUtils.isEmpty(trashDir)) {
-                        String errorMsg =
-                            String.format(
-                                "The trash feature(fs.obs.trash.enable) is "
-                                    + "enabled, but the "
-                                    + "configuration(fs.obs.trash.dir [%s]) "
-                                    + "is empty.",
-                                trashDir);
+                        String errorMsg = String.format(
+                            "The trash feature(fs.obs.trash.enable) is " + "enabled, but the "
+                                + "configuration(fs.obs.trash.dir [%s]) " + "is empty.", trashDir);
                         LOG.error(errorMsg);
                         throw new ObsException(errorMsg);
                     }
@@ -472,139 +438,95 @@ public void initialize(final URI name, final Configuration originalConf)
                     trashDir = OBSCommonUtils.maybeAddTrailingSlash(trashDir);
                 }
             }
+            OBSCommonUtils.setMaxTimeInMillisecondsToRetry(conf.getLong(OBSConstants.MAX_TIME_IN_MILLISECOND_TO_RETRY,
+                OBSConstants.DEFAULT_TIME_IN_MILLISECOND_TO_RETRY));
             enableCanonicalServiceName =
                 conf.getBoolean(OBSConstants.GET_CANONICAL_SERVICE_NAME_ENABLE,
                     OBSConstants.DEFAULT_GET_CANONICAL_SERVICE_NAME_ENABLE);
+            enableFileVisibilityAfterCreate =
+                conf.getBoolean(OBSConstants.FILE_VISIBILITY_AFTER_CREATE_ENABLE,
+                    OBSConstants.DEFAULT_FILE_VISIBILITY_AFTER_CREATE_ENABLE);
         } catch (ObsException e) {
-            throw OBSCommonUtils.translateException("initializing ",
-                new Path(name), e);
+            throw OBSCommonUtils.translateException("initializing ", new Path(name), e);
         }
         LOG.info("Finish initializing filesystem instance for uri: {}", uri);
     }

     private void initThreadPools(final Configuration conf) {
-        long keepAliveTime = OBSCommonUtils.longOption(conf,
-            OBSConstants.KEEPALIVE_TIME,
+        long keepAliveTime = OBSCommonUtils.longOption(conf, OBSConstants.KEEPALIVE_TIME,
             OBSConstants.DEFAULT_KEEPALIVE_TIME, 0);
-        int maxThreads = conf.getInt(OBSConstants.MAX_THREADS,
-            OBSConstants.DEFAULT_MAX_THREADS);
+        int maxThreads = conf.getInt(OBSConstants.MAX_THREADS, OBSConstants.DEFAULT_MAX_THREADS);
         if (maxThreads < 2) {
-            LOG.warn(OBSConstants.MAX_THREADS
-                + " must be at least 2: forcing to 2.");
+            LOG.warn(OBSConstants.MAX_THREADS + " must be at least 2: forcing to 2.");
             maxThreads = 2;
         }
-        int totalTasks = OBSCommonUtils.intOption(conf,
-            OBSConstants.MAX_TOTAL_TASKS,
+        int totalTasks = OBSCommonUtils.intOption(conf, OBSConstants.MAX_TOTAL_TASKS,
            OBSConstants.DEFAULT_MAX_TOTAL_TASKS, 1);
-        boundedMultipartUploadThreadPool =
-            BlockingThreadPoolExecutorService.newInstance(
-                maxThreads,
-                maxThreads + totalTasks,
-                keepAliveTime,
-                "obs-transfer-shared");
-
-        int maxDeleteThreads = conf.getInt(OBSConstants.MAX_DELETE_THREADS,
-            OBSConstants.DEFAULT_MAX_DELETE_THREADS);
+        boundedMultipartUploadThreadPool = BlockingThreadPoolExecutorService.newInstance(maxThreads,
+            maxThreads + totalTasks, keepAliveTime, "obs-transfer-shared");
+
+        int maxDeleteThreads = conf.getInt(OBSConstants.MAX_DELETE_THREADS, OBSConstants.DEFAULT_MAX_DELETE_THREADS);
        if (maxDeleteThreads < 2) {
-            LOG.warn(OBSConstants.MAX_DELETE_THREADS
-                + " must be at least 2: forcing to 2.");
+            LOG.warn(OBSConstants.MAX_DELETE_THREADS + " must be at least 2: forcing to 2.");
            maxDeleteThreads = 2;
        }
        int coreDeleteThreads = (int) Math.ceil(maxDeleteThreads / 2.0);
-        boundedDeleteThreadPool =
-            new ThreadPoolExecutor(
-                coreDeleteThreads,
-                maxDeleteThreads,
-                keepAliveTime,
-                TimeUnit.SECONDS,
-                new LinkedBlockingQueue<>(),
-                BlockingThreadPoolExecutorService.newDaemonThreadFactory(
-                    "obs-delete-transfer-shared"));
+        boundedDeleteThreadPool = new ThreadPoolExecutor(coreDeleteThreads, maxDeleteThreads, keepAliveTime,
+            TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
+            BlockingThreadPoolExecutorService.newDaemonThreadFactory("obs-delete-transfer-shared"));
        boundedDeleteThreadPool.allowCoreThreadTimeOut(true);

        if (enablePosix) {
-            obsClientDFSListEnable = conf.getBoolean(
-                OBSConstants.OBS_CLIENT_DFS_LIST_ENABLE, true);
+            obsClientDFSListEnable = conf.getBoolean(OBSConstants.OBS_CLIENT_DFS_LIST_ENABLE, true);
            if (obsClientDFSListEnable) {
-                int coreListThreads = conf.getInt(
-                    OBSConstants.CORE_LIST_THREADS,
+                int coreListThreads = conf.getInt(OBSConstants.CORE_LIST_THREADS,
                    OBSConstants.DEFAULT_CORE_LIST_THREADS);
-                int maxListThreads = conf.getInt(OBSConstants.MAX_LIST_THREADS,
-                    OBSConstants.DEFAULT_MAX_LIST_THREADS);
-                int listWorkQueueCapacity = conf.getInt(
-                    OBSConstants.LIST_WORK_QUEUE_CAPACITY,
+                int maxListThreads = conf.getInt(OBSConstants.MAX_LIST_THREADS, OBSConstants.DEFAULT_MAX_LIST_THREADS);
+                int listWorkQueueCapacity = conf.getInt(OBSConstants.LIST_WORK_QUEUE_CAPACITY,
                    OBSConstants.DEFAULT_LIST_WORK_QUEUE_CAPACITY);
-                listParallelFactor = conf.getInt(
-                    OBSConstants.LIST_PARALLEL_FACTOR,
+                listParallelFactor = conf.getInt(OBSConstants.LIST_PARALLEL_FACTOR,
                    OBSConstants.DEFAULT_LIST_PARALLEL_FACTOR);
                if (listParallelFactor < 1) {
-                    LOG.warn(OBSConstants.LIST_PARALLEL_FACTOR
-                        + " must be at least 1: forcing to 1.");
+                    LOG.warn(OBSConstants.LIST_PARALLEL_FACTOR + " must be at least 1: forcing to 1.");
                    listParallelFactor = 1;
                }
-                boundedListThreadPool =
-                    new ThreadPoolExecutor(
-                        coreListThreads,
-                        maxListThreads,
-                        keepAliveTime,
-                        TimeUnit.SECONDS,
-                        new LinkedBlockingQueue<>(listWorkQueueCapacity),
-                        BlockingThreadPoolExecutorService
-                            .newDaemonThreadFactory(
-                                "obs-list-transfer-shared"));
+                boundedListThreadPool = new ThreadPoolExecutor(coreListThreads, maxListThreads, keepAliveTime,
+                    TimeUnit.SECONDS, new LinkedBlockingQueue<>(listWorkQueueCapacity),
+                    BlockingThreadPoolExecutorService.newDaemonThreadFactory("obs-list-transfer-shared"));
                boundedListThreadPool.allowCoreThreadTimeOut(true);
            }
        } else {
-            int maxCopyThreads = conf.getInt(OBSConstants.MAX_COPY_THREADS,
-                OBSConstants.DEFAULT_MAX_COPY_THREADS);
+            int maxCopyThreads = conf.getInt(OBSConstants.MAX_COPY_THREADS, OBSConstants.DEFAULT_MAX_COPY_THREADS);
            if (maxCopyThreads < 2) {
-                LOG.warn(OBSConstants.MAX_COPY_THREADS
-                    + " must be at least 2: forcing to 2.");
+                LOG.warn(OBSConstants.MAX_COPY_THREADS + " must be at least 2: forcing to 2.");
                maxCopyThreads = 2;
            }
            int coreCopyThreads = (int) Math.ceil(maxCopyThreads / 2.0);
-            boundedCopyThreadPool =
-                new ThreadPoolExecutor(
-                    coreCopyThreads,
-                    maxCopyThreads,
-                    keepAliveTime,
-                    TimeUnit.SECONDS,
-                    new LinkedBlockingQueue<>(),
-                    BlockingThreadPoolExecutorService.newDaemonThreadFactory(
-                        "obs-copy-transfer-shared"));
+            boundedCopyThreadPool = new ThreadPoolExecutor(coreCopyThreads, maxCopyThreads, keepAliveTime,
+                TimeUnit.SECONDS, new LinkedBlockingQueue<>(),
+                BlockingThreadPoolExecutorService.newDaemonThreadFactory("obs-copy-transfer-shared"));
            boundedCopyThreadPool.allowCoreThreadTimeOut(true);

-            copyPartSize = OBSCommonUtils.longOption(conf,
-                OBSConstants.COPY_PART_SIZE,
+            copyPartSize = OBSCommonUtils.longOption(conf, OBSConstants.COPY_PART_SIZE,
                OBSConstants.DEFAULT_COPY_PART_SIZE, 0);
            if (copyPartSize > OBSConstants.MAX_COPY_PART_SIZE) {
-                LOG.warn(
-                    "obs: {} capped to ~5GB (maximum allowed part size with "
-                        + "current output mechanism)",
+                LOG.warn("obs: {} capped to ~5GB (maximum allowed part size with " + "current output mechanism)",
                    OBSConstants.COPY_PART_SIZE);
                copyPartSize = OBSConstants.MAX_COPY_PART_SIZE;
            }

-            int maxCopyPartThreads = conf.getInt(
-                OBSConstants.MAX_COPY_PART_THREADS,
+            int maxCopyPartThreads = conf.getInt(OBSConstants.MAX_COPY_PART_THREADS,
                OBSConstants.DEFAULT_MAX_COPY_PART_THREADS);
            if (maxCopyPartThreads < 2) {
-                LOG.warn(OBSConstants.MAX_COPY_PART_THREADS
-                    + " must be at least 2: forcing to 2.");
+                LOG.warn(OBSConstants.MAX_COPY_PART_THREADS + " must be at least 2: forcing to 2.");
                maxCopyPartThreads = 2;
            }
            int coreCopyPartThreads = (int) Math.ceil(maxCopyPartThreads / 2.0);
-            boundedCopyPartThreadPool =
-                new ThreadPoolExecutor(
-                    coreCopyPartThreads,
-                    maxCopyPartThreads,
-                    keepAliveTime,
-                    TimeUnit.SECONDS,
-                    new LinkedBlockingQueue<>(),
-
BlockingThreadPoolExecutorService.newDaemonThreadFactory( - "obs-copy-part-transfer-shared")); + boundedCopyPartThreadPool = new ThreadPoolExecutor(coreCopyPartThreads, maxCopyPartThreads, keepAliveTime, + TimeUnit.SECONDS, new LinkedBlockingQueue<>(), + BlockingThreadPoolExecutorService.newDaemonThreadFactory("obs-copy-part-transfer-shared")); boundedCopyPartThreadPool.allowCoreThreadTimeOut(true); } } @@ -614,7 +536,7 @@ private void initThreadPools(final Configuration conf) { * * @return is it posix bucket */ - boolean isFsBucket() { + public boolean isFsBucket() { return enablePosix; } @@ -623,7 +545,7 @@ boolean isFsBucket() { * * @return is read transform enabled */ - boolean isReadTransformEnabled() { + public boolean isReadTransformEnabled() { return readTransformEnable; } @@ -634,8 +556,7 @@ boolean isReadTransformEnabled() { */ private void initCannedAcls(final Configuration conf) { // No canned acl in obs - String cannedACLName = conf.get(OBSConstants.CANNED_ACL, - OBSConstants.DEFAULT_CANNED_ACL); + String cannedACLName = conf.get(OBSConstants.CANNED_ACL, OBSConstants.DEFAULT_CANNED_ACL); if (!cannedACLName.isEmpty()) { switch (cannedACLName) { case "Private": @@ -648,20 +569,16 @@ private void initCannedAcls(final Configuration conf) { cannedACL = AccessControlList.REST_CANNED_PUBLIC_READ_WRITE; break; case "AuthenticatedRead": - cannedACL - = AccessControlList.REST_CANNED_AUTHENTICATED_READ; + cannedACL = AccessControlList.REST_CANNED_AUTHENTICATED_READ; break; case "LogDeliveryWrite": - cannedACL - = AccessControlList.REST_CANNED_LOG_DELIVERY_WRITE; + cannedACL = AccessControlList.REST_CANNED_LOG_DELIVERY_WRITE; break; case "BucketOwnerRead": cannedACL = AccessControlList.REST_CANNED_BUCKET_OWNER_READ; break; case "BucketOwnerFullControl": - cannedACL - = AccessControlList - .REST_CANNED_BUCKET_OWNER_FULL_CONTROL; + cannedACL = AccessControlList.REST_CANNED_BUCKET_OWNER_FULL_CONTROL; break; default: cannedACL = null; @@ -717,7 +634,7 @@ public int 
getDefaultPort() { * @return OBS client */ @VisibleForTesting - ObsClient getObsClient() { + public ObsClient getObsClient() { return obs; } @@ -772,8 +689,7 @@ protected URI canonicalizeUri(final URI rawUri) { * @throws IOException on any failure to open the file */ @Override - public FSDataInputStream open(final Path f, final int bufferSize) - throws IOException { + public FSDataInputStream open(final Path f, final int bufferSize) throws IOException { checkOpen(); long startTime = System.currentTimeMillis(); long endTime; @@ -784,10 +700,8 @@ public FSDataInputStream open(final Path f, final int bufferSize) } catch (FileConflictException e) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.OPEN, false, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.OPEN, false, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } throw new AccessControlException(e); @@ -796,25 +710,25 @@ public FSDataInputStream open(final Path f, final int bufferSize) if (fileStatus.isDirectory()) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.OPEN, - false, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.OPEN, false, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } - throw new FileNotFoundException( - "Can't open " + f + " because it is a directory"); + throw new FileNotFoundException("Can't open " + f + " because it is a directory"); } - FSDataInputStream fsDataInputStream = new FSDataInputStream( - new OBSInputStream(bucket, OBSCommonUtils.pathToKey(this, f), - 
fileStatus.getLen(), - obs, statistics, readAheadRange, this)); + // FSDataInputStream fsDataInputStream = new FSDataInputStream( + // new OBSInputStream(bucket, OBSCommonUtils.pathToKey(this, f), + // fileStatus.getLen(), + // obs, statistics, readAheadRange, this)); + + FSInputStream fsInputStream = inputPolicyFactory.create(this, bucket, OBSCommonUtils.pathToKey(this, f), + fileStatus.getLen(), statistics, boundedMultipartUploadThreadPool); + FSDataInputStream fsDataInputStream = new FSDataInputStream(fsInputStream); + endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.OPEN, - true, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.OPEN, true, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } return fsDataInputStream; @@ -837,14 +751,8 @@ public FSDataInputStream open(final Path f, final int bufferSize) * @see #setPermission(Path, FsPermission) */ @Override - public FSDataOutputStream create( - final Path f, - final FsPermission permission, - final boolean overwrite, - final int bufferSize, - final short replication, - final long blkSize, - final Progressable progress) + public FSDataOutputStream create(final Path f, final FsPermission permission, final boolean overwrite, + final int bufferSize, final short replication, final long blkSize, final Progressable progress) throws IOException { checkOpen(); String key = OBSCommonUtils.pathToKey(this, f); @@ -859,11 +767,9 @@ public FSDataOutputStream create( } catch (FileConflictException e) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.OVERWRITE, - BasicMetricsConsumer.MetricRecord.CREATE, - false, endTime - startTime); + 
BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.OVERWRITE, BasicMetricsConsumer.MetricRecord.CREATE, false, + endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } @@ -874,11 +780,9 @@ public FSDataOutputStream create( if (status.isDirectory()) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.OVERWRITE, - BasicMetricsConsumer.MetricRecord.CREATE, - false, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.OVERWRITE, BasicMetricsConsumer.MetricRecord.CREATE, false, + endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } // path references a directory: automatic error @@ -887,11 +791,9 @@ public FSDataOutputStream create( if (!overwrite) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.OVERWRITE, - BasicMetricsConsumer.MetricRecord.CREATE, - false, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.OVERWRITE, BasicMetricsConsumer.MetricRecord.CREATE, false, + endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } // path references a file and overwrite is disabled @@ -904,29 +806,15 @@ public FSDataOutputStream create( exist = false; } - FSDataOutputStream outputStream = new FSDataOutputStream( - new OBSBlockOutputStream( - this, - key, - 0, - new SemaphoredDelegatingExecutor( - boundedMultipartUploadThreadPool, - blockOutputActiveBlocks, true), - false), + FSDataOutputStream outputStream = new FSDataOutputStream(new OBSBlockOutputStream(this, key, 0, + new 
SemaphoredDelegatingExecutor(boundedMultipartUploadThreadPool, blockOutputActiveBlocks, true), false), null); - if (!exist) { + if (enableFileVisibilityAfterCreate && !exist) { outputStream.close(); - outputStream = new FSDataOutputStream( - new OBSBlockOutputStream( - this, - key, - 0, - new SemaphoredDelegatingExecutor( - boundedMultipartUploadThreadPool, - blockOutputActiveBlocks, true), - false), - null); + outputStream = new FSDataOutputStream(new OBSBlockOutputStream(this, key, 0, + new SemaphoredDelegatingExecutor(boundedMultipartUploadThreadPool, blockOutputActiveBlocks, true), + false), null); } synchronized (filesBeingWritten) { @@ -935,11 +823,9 @@ public FSDataOutputStream create( endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.OVERWRITE, - BasicMetricsConsumer.MetricRecord.CREATE, - true, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.OVERWRITE, BasicMetricsConsumer.MetricRecord.CREATE, true, + endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } return outputStream; @@ -988,28 +874,19 @@ OBSWriteOperationHelper getWriteHelper() { * @throws IOException io exception */ @Override - public FSDataOutputStream create( - final Path f, - final FsPermission permission, - final EnumSet flags, - final int bufferSize, - final short replication, - final long blkSize, - final Progressable progress, - final ChecksumOpt checksumOpt) - throws IOException { + public FSDataOutputStream create(final Path f, final FsPermission permission, final EnumSet flags, + final int bufferSize, final short replication, final long blkSize, final Progressable progress, + final ChecksumOpt checksumOpt) throws IOException { checkOpen(); long startTime = System.currentTimeMillis(); long endTime; - LOG.debug("create: Creating new file {}, 
flags:{}, isFsBucket:{}", f, - flags, isFsBucket()); + LOG.debug("create: Creating new file {}, flags:{}, isFsBucket:{}", f, flags, isFsBucket()); OBSCommonUtils.checkCreateFlag(flags); FSDataOutputStream outputStream; if (null != flags && flags.contains(CreateFlag.APPEND)) { if (!isFsBucket()) { throw new UnsupportedOperationException( - "non-posix bucket. Append is not supported by " - + "OBSFileSystem"); + "non-posix bucket. Append is not supported by " + "OBSFileSystem"); } String key = OBSCommonUtils.pathToKey(this, f); FileStatus status; @@ -1017,16 +894,13 @@ public FSDataOutputStream create( try { // get the status or throw an FNFE try { - status = OBSCommonUtils.innerGetFileStatusWithRetry(this, - f); + status = OBSCommonUtils.innerGetFileStatusWithRetry(this, f); } catch (FileConflictException e) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.FLAGS, - BasicMetricsConsumer.MetricRecord.CREATE, - false, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.FLAGS, BasicMetricsConsumer.MetricRecord.CREATE, false, + endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } throw new ParentNotDirectoryException(e.getMessage()); @@ -1036,44 +910,27 @@ public FSDataOutputStream create( if (status.isDirectory()) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.FLAGS, - BasicMetricsConsumer.MetricRecord.CREATE, - false, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.FLAGS, BasicMetricsConsumer.MetricRecord.CREATE, false, + endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } // path 
references a directory: automatic error throw new FileAlreadyExistsException(f + " is a directory"); } } catch (FileNotFoundException e) { - LOG.debug("FileNotFoundException, create: Creating new file {}", - f); + LOG.debug("FileNotFoundException, create: Creating new file {}", f); exist = false; } - outputStream = new FSDataOutputStream( - new OBSBlockOutputStream( - this, - key, - 0, - new SemaphoredDelegatingExecutor( - boundedMultipartUploadThreadPool, - blockOutputActiveBlocks, true), - true), - null); - if (!exist) { + outputStream = new FSDataOutputStream(new OBSBlockOutputStream(this, key, 0, + new SemaphoredDelegatingExecutor(boundedMultipartUploadThreadPool, blockOutputActiveBlocks, true), + true), null); + if (enableFileVisibilityAfterCreate && !exist) { outputStream.close(); - outputStream = new FSDataOutputStream( - new OBSBlockOutputStream( - this, - key, - 0, - new SemaphoredDelegatingExecutor( - boundedMultipartUploadThreadPool, - blockOutputActiveBlocks, true), - true), - null); + outputStream = new FSDataOutputStream(new OBSBlockOutputStream(this, key, 0, + new SemaphoredDelegatingExecutor(boundedMultipartUploadThreadPool, blockOutputActiveBlocks, true), + true), null); } synchronized (filesBeingWritten) { filesBeingWritten.put(key, outputStream); @@ -1081,31 +938,21 @@ public FSDataOutputStream create( endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.FLAGS, - BasicMetricsConsumer.MetricRecord.CREATE, - true, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.FLAGS, BasicMetricsConsumer.MetricRecord.CREATE, true, + endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } return outputStream; } else { - outputStream = create( - f, - permission, - flags == null || flags.contains(CreateFlag.OVERWRITE), - bufferSize, 
- replication, - blkSize, - progress); + outputStream = create(f, permission, flags == null || flags.contains(CreateFlag.OVERWRITE), bufferSize, + replication, blkSize, progress); endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.FLAGS, - BasicMetricsConsumer.MetricRecord.CREATE, - true, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.FLAGS, BasicMetricsConsumer.MetricRecord.CREATE, true, + endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } return outputStream; @@ -1127,15 +974,9 @@ public FSDataOutputStream create( * @throws IOException IO failure */ @Override - public FSDataOutputStream createNonRecursive( - final Path path, - final FsPermission permission, - final EnumSet flags, - final int bufferSize, - final short replication, - final long blkSize, - final Progressable progress) - throws IOException { + public FSDataOutputStream createNonRecursive(final Path path, final FsPermission permission, + final EnumSet flags, final int bufferSize, final short replication, final long blkSize, + final Progressable progress) throws IOException { checkOpen(); long startTime = System.currentTimeMillis(); long endTime; @@ -1143,30 +984,19 @@ public FSDataOutputStream createNonRecursive( if (path.getParent() != null && !this.exists(path.getParent())) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.CREATE_NR, false, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.CREATE_NR, false, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } - throw new 
FileNotFoundException(path.toString() - + " parent directory not exist."); + throw new FileNotFoundException(path.toString() + " parent directory not exist."); } - FSDataOutputStream fsDataOutputStream = create( - path, - permission, - flags.contains(CreateFlag.OVERWRITE), - bufferSize, - replication, - blkSize, - progress); + FSDataOutputStream fsDataOutputStream = create(path, permission, flags.contains(CreateFlag.OVERWRITE), + bufferSize, replication, blkSize, progress); endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.CREATE_NR, - true, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.CREATE_NR, true, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } return fsDataOutputStream; @@ -1181,16 +1011,13 @@ public FSDataOutputStream createNonRecursive( * @throws IOException indicating that append is not supported */ @Override - public FSDataOutputStream append(final Path f, final int bufferSize, - final Progressable progress) + public FSDataOutputStream append(final Path f, final int bufferSize, final Progressable progress) throws IOException { checkOpen(); long startTime = System.currentTimeMillis(); long endTime; if (!isFsBucket()) { - throw new UnsupportedOperationException( - "non-posix bucket. Append is not supported " - + "by OBSFileSystem"); + throw new UnsupportedOperationException("non-posix bucket. 
Append is not supported " + "by OBSFileSystem"); } LOG.debug("append: Append file {}.", f); String key = OBSCommonUtils.pathToKey(this, f); @@ -1202,10 +1029,8 @@ public FSDataOutputStream append(final Path f, final int bufferSize, } catch (FileConflictException e) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.APPEND, - false, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.APPEND, false, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } throw new AccessControlException(e); @@ -1216,10 +1041,8 @@ public FSDataOutputStream append(final Path f, final int bufferSize, if (status.isDirectory()) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.APPEND, - false, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.APPEND, false, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } // path references a directory: automatic error @@ -1229,26 +1052,20 @@ public FSDataOutputStream append(final Path f, final int bufferSize, if (isFileBeingWritten(key)) { // AlreadyBeingCreatedException (on HDFS NameNode) is transformed // into IOException (on HDFS Client) - throw new IOException( - "Cannot append " + f + " that is being written."); + throw new IOException("Cannot append " + f + " that is being written."); } - FSDataOutputStream outputStream = new FSDataOutputStream( - new OBSBlockOutputStream(this, key, objectLen, - new SemaphoredDelegatingExecutor( - boundedMultipartUploadThreadPool, - blockOutputActiveBlocks, true), - true), null); + FSDataOutputStream outputStream = 
new FSDataOutputStream(new OBSBlockOutputStream(this, key, objectLen, + new SemaphoredDelegatingExecutor(boundedMultipartUploadThreadPool, blockOutputActiveBlocks, true), true), + null,objectLen); synchronized (filesBeingWritten) { filesBeingWritten.put(key, outputStream); } endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.APPEND, - true, endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.APPEND, true, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } return outputStream; @@ -1281,9 +1098,8 @@ public boolean truncate(Path f, long newLength) throws IOException { } if (newLength < 0) { - throw new HadoopIllegalArgumentException( - "Cannot truncate " + f + " to a negative file size: " - + newLength + "."); + throw new IOException(new HadoopIllegalArgumentException( + "Cannot truncate " + f + " to a negative file size: " + newLength + ".")); } FileStatus status; @@ -1301,8 +1117,7 @@ public boolean truncate(Path f, long newLength) throws IOException { if (isFileBeingWritten(key)) { // AlreadyBeingCreatedException (on HDFS NameNode) is transformed // into IOException (on HDFS Client) - throw new AlreadyBeingCreatedException( - "Cannot truncate " + f + " that is being written."); + throw new AlreadyBeingCreatedException("Cannot truncate " + f + " that is being written."); } // Truncate length check. @@ -1311,10 +1126,9 @@ public boolean truncate(Path f, long newLength) throws IOException { return true; } if (oldLength < newLength) { - throw new HadoopIllegalArgumentException( - "Cannot truncate " + f - + " to a larger file size. Current size: " + oldLength - + ", truncate size: " + newLength + "."); + throw new IOException(new HadoopIllegalArgumentException( + "Cannot truncate " + f + " to a larger file size. 
Current size: " + oldLength + ", truncate size: " + + newLength + ".")); } OBSPosixBucketUtils.innerFsTruncateWithRetry(this, f, newLength); @@ -1356,26 +1170,20 @@ public boolean rename(final Path src, final Path dst) throws IOException { LOG.debug("Rename path {} to {} start", src, dst); try { if (enablePosix) { - boolean success = OBSPosixBucketUtils.renameBasedOnPosix(this, - src, dst); + boolean success = OBSPosixBucketUtils.renameBasedOnPosix(this, src, dst); endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.RENAME, true, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.RENAME, true, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } return success; } else { - boolean success = - OBSObjectBucketUtils.renameBasedOnObject(this, src, dst); + boolean success = OBSObjectBucketUtils.renameBasedOnObject(this, src, dst); endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.RENAME, true, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.RENAME, true, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } return success; @@ -1383,21 +1191,16 @@ public boolean rename(final Path src, final Path dst) throws IOException { } catch (ObsException e) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.RENAME, false, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + 
BasicMetricsConsumer.MetricRecord.RENAME, false, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } - throw OBSCommonUtils.translateException( - "rename(" + src + ", " + dst + ")", src, e); + throw OBSCommonUtils.translateException("rename(" + src + ", " + dst + ")", src, e); } catch (RenameFailedException e) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.RENAME, false, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.RENAME, false, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } LOG.error(e.getMessage()); @@ -1405,19 +1208,15 @@ public boolean rename(final Path src, final Path dst) throws IOException { } catch (FileNotFoundException e) { endTime = System.currentTimeMillis(); if (getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord(null, - BasicMetricsConsumer.MetricRecord.RENAME, false, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null, + BasicMetricsConsumer.MetricRecord.RENAME, false, endTime - startTime); OBSCommonUtils.setMetricsInfo(this, record); } LOG.error(e.toString()); return false; } finally { endTime = System.currentTimeMillis(); - LOG.debug( - "Rename path {} to {} finished, thread:{}, " - + "timeUsedInMilliSec:{}.", src, dst, threadId, + LOG.debug("Rename path {} to {} finished, thread:{}, " + "timeUsedInMilliSec:{}.", src, dst, threadId, endTime - startTime); } } @@ -1499,39 +1298,30 @@ boolean isEnableMultiObjectDelete() { * @throws IOException due to inability to delete a directory or file */ @Override - public boolean delete(final Path f, final boolean recursive) - throws IOException { + public boolean delete(final Path f, final boolean recursive) throws 
     IOException {
         checkOpen();
         long startTime = System.currentTimeMillis();
         long endTime;
         try {
-            FileStatus status = OBSCommonUtils.innerGetFileStatusWithRetry(this,
-                f);
-            LOG.debug("delete: path {} - recursive {}", status.getPath(),
-                recursive);
+            FileStatus status = OBSCommonUtils.innerGetFileStatusWithRetry(this, f);
+            LOG.debug("delete: path {} - recursive {}", status.getPath(), recursive);
             if (enablePosix) {
-                boolean success = OBSPosixBucketUtils.fsDelete(this, status,
-                    recursive);
+                boolean success = OBSPosixBucketUtils.fsDelete(this, status, recursive);
                 endTime = System.currentTimeMillis();
                 if (getMetricSwitch()) {
-                    BasicMetricsConsumer.MetricRecord record =
-                        new BasicMetricsConsumer.MetricRecord(null,
-                            BasicMetricsConsumer.MetricRecord.DELETE, true,
-                            endTime - startTime);
+                    BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                        BasicMetricsConsumer.MetricRecord.DELETE, true, endTime - startTime);
                     OBSCommonUtils.setMetricsInfo(this, record);
                 }
                 return success;
             }
-            boolean success = OBSObjectBucketUtils.objectDelete(this, status,
-                recursive);
+            boolean success = OBSObjectBucketUtils.objectDelete(this, status, recursive);
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.DELETE, true,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.DELETE, true, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             return success;
@@ -1539,30 +1329,24 @@ public boolean delete(final Path f, final boolean recursive)
             LOG.warn("Couldn't delete {} - does not exist", f);
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.DELETE, false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.DELETE, false, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             return false;
         } catch (FileConflictException e) {
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.DELETE, false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.DELETE, false, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             throw new AccessControlException(e);
         } catch (ObsException e) {
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.DELETE, false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.DELETE, false, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             throw OBSCommonUtils.translateException("delete", f, e);
@@ -1597,36 +1381,28 @@ String getTrashDir() {
      * @throws IOException see specific implementation
      */
     @Override
-    public FileStatus[] listStatus(final Path f)
-        throws FileNotFoundException, IOException {
+    public FileStatus[] listStatus(final Path f) throws FileNotFoundException, IOException {
         checkOpen();
         long startTime = System.currentTimeMillis();
         long threadId = Thread.currentThread().getId();
         long endTime;
         try {
-            FileStatus[] statuses = OBSCommonUtils.innerListStatus(this, f,
-                false);
+            FileStatus[] statuses = OBSCommonUtils.innerListStatus(this, f, false);
             endTime = System.currentTimeMillis();
-            LOG.debug(
-                "List status for path:{}, thread:{}, timeUsedInMilliSec:{}", f,
-                threadId, endTime - startTime);
+            LOG.debug("List status for path:{}, thread:{}, timeUsedInMilliSec:{}", f, threadId, endTime - startTime);
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(
-                        BasicMetricsConsumer.MetricRecord.NONRECURSIVE,
-                        BasicMetricsConsumer.MetricRecord.LIST_STATUS, true,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                    BasicMetricsConsumer.MetricRecord.NONRECURSIVE, BasicMetricsConsumer.MetricRecord.LIST_STATUS, true,
+                    endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             return statuses;
         } catch (ObsException e) {
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(
-                        BasicMetricsConsumer.MetricRecord.NONRECURSIVE,
-                        BasicMetricsConsumer.MetricRecord.LIST_STATUS, false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                    BasicMetricsConsumer.MetricRecord.NONRECURSIVE, BasicMetricsConsumer.MetricRecord.LIST_STATUS,
+                    false, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             throw OBSCommonUtils.translateException("listStatus", f, e);
@@ -1645,41 +1421,32 @@ public FileStatus[] listStatus(final Path f)
      * @throws FileNotFoundException when the path does not exist
      * @throws IOException see specific implementation
      */
-    public FileStatus[] listStatus(final Path f, final boolean recursive)
-        throws FileNotFoundException, IOException {
+    public FileStatus[] listStatus(final Path f, final boolean recursive) throws FileNotFoundException, IOException {
         checkOpen();
         long startTime = System.currentTimeMillis();
         long threadId = Thread.currentThread().getId();
         long endTime;
         try {
-            FileStatus[] statuses = OBSCommonUtils.innerListStatus(this, f,
-                recursive);
+            FileStatus[] statuses = OBSCommonUtils.innerListStatus(this, f, recursive);
             endTime = System.currentTimeMillis();
-            LOG.debug(
-                "List status for path:{}, thread:{}, timeUsedInMilliSec:{}", f,
-                threadId, endTime - startTime);
+            LOG.debug("List status for path:{}, thread:{}, timeUsedInMilliSec:{}", f, threadId, endTime - startTime);
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(
-                        BasicMetricsConsumer.MetricRecord.RECURSIVE,
-                        BasicMetricsConsumer.MetricRecord.LIST_STATUS, true,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                    BasicMetricsConsumer.MetricRecord.RECURSIVE, BasicMetricsConsumer.MetricRecord.LIST_STATUS, true,
+                    endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             return statuses;
         } catch (ObsException e) {
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(
-                        BasicMetricsConsumer.MetricRecord.RECURSIVE,
-                        BasicMetricsConsumer.MetricRecord.LIST_STATUS, false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                    BasicMetricsConsumer.MetricRecord.RECURSIVE, BasicMetricsConsumer.MetricRecord.LIST_STATUS, false,
+                    endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             throw OBSCommonUtils.translateException(
-                "listStatus with recursive flag["
-                    + (recursive ? "true] " : "false] "), f, e);
+                "listStatus with recursive flag[" + (recursive ? "true] " : "false] "), f, e);
         }
     }
@@ -1712,8 +1479,7 @@ public Path getWorkingDirectory() {
     public void setWorkingDirectory(final Path newDir) {
         String result = fixRelativePart(newDir).toUri().getPath();
         if (!OBSCommonUtils.isValidName(result)) {
-            throw new IllegalArgumentException(
-                "Invalid directory name " + result);
+            throw new IllegalArgumentException("Invalid directory name " + result);
         }
         workingDir = fixRelativePart(newDir);
     }
@@ -1748,20 +1514,16 @@ public boolean mkdirs(final Path path, final FsPermission permission)
             boolean success = OBSCommonUtils.innerMkdirs(this, path);
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.MKDIRS, true,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.MKDIRS, true, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             return success;
         } catch (ObsException e) {
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.MKDIRS, false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.MKDIRS, false, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             throw OBSCommonUtils.translateException("mkdirs", path, e);
@@ -1777,30 +1539,24 @@ public boolean mkdirs(final Path path, final FsPermission permission)
      * @throws IOException on other problems
      */
     @Override
-    public FileStatus getFileStatus(final Path f)
-        throws FileNotFoundException, IOException {
+    public FileStatus getFileStatus(final Path f) throws FileNotFoundException, IOException {
         checkOpen();
         long startTime = System.currentTimeMillis();
         long endTime;
         try {
-            FileStatus fileStatus = OBSCommonUtils.innerGetFileStatusWithRetry(
-                this, f);
+            FileStatus fileStatus = OBSCommonUtils.innerGetFileStatusWithRetry(this, f);
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.GET_FILE_STATUS, true,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.GET_FILE_STATUS, true, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             return fileStatus;
         } catch (FileConflictException e) {
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.GET_FILE_STATUS, false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.GET_FILE_STATUS, false, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             // For super user, convert AccessControlException
@@ -1836,8 +1592,7 @@ OBSFileStatus innerGetFileStatus(final Path f) throws IOException {
      * @throws IOException IO failure
      */
     @Override
-    public ContentSummary getContentSummary(final Path f)
-        throws FileNotFoundException, IOException {
+    public ContentSummary getContentSummary(final Path f) throws FileNotFoundException, IOException {
         checkOpen();
         long startTime = System.currentTimeMillis();
         long endTime;
@@ -1852,11 +1607,8 @@ public ContentSummary getContentSummary(final Path f)
         } catch (FileConflictException e) {
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.GET_CONTENT_SUMMARY,
-                        false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.GET_CONTENT_SUMMARY, false, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             throw new AccessControlException(e);
@@ -1872,11 +1624,8 @@ public ContentSummary getContentSummary(final Path f)
                 .build();
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.GET_CONTENT_SUMMARY,
-                        true,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.GET_CONTENT_SUMMARY, true, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
@@ -1885,28 +1634,20 @@ public ContentSummary getContentSummary(final Path f)
         // f is a directory
         if (enablePosix) {
-            contentSummary = OBSPosixBucketUtils.fsGetDirectoryContentSummary(
-                this, OBSCommonUtils.pathToKey(this, f));
+            contentSummary = OBSPosixBucketUtils.fsGetDirectoryContentSummary(this, OBSCommonUtils.pathToKey(this, f));
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.GET_CONTENT_SUMMARY,
-                        true,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.GET_CONTENT_SUMMARY, true, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             return contentSummary;
         } else {
-            contentSummary = OBSObjectBucketUtils.getDirectoryContentSummary(
-                this, OBSCommonUtils.pathToKey(this, f));
+            contentSummary = OBSObjectBucketUtils.getDirectoryContentSummary(this, OBSCommonUtils.pathToKey(this, f));
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.GET_CONTENT_SUMMARY,
-                        true,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.GET_CONTENT_SUMMARY, true, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             return contentSummary;
@@ -1926,9 +1667,8 @@ public ContentSummary getContentSummary(final Path f)
      * @throws IOException IO problem
      */
     @Override
-    public void copyFromLocalFile(final boolean delSrc, final boolean overwrite,
-        final Path src, final Path dst) throws FileAlreadyExistsException,
-        IOException {
+    public void copyFromLocalFile(final boolean delSrc, final boolean overwrite, final Path src, final Path dst)
+        throws FileAlreadyExistsException, IOException {
         checkOpen();
         long startTime = System.currentTimeMillis();
         long endTime;
@@ -1936,23 +1676,18 @@ public void copyFromLocalFile(final boolean delSrc, final boolean overwrite,
             super.copyFromLocalFile(delSrc, overwrite, src, dst);
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.COPYFROMLOCAL, true,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.COPYFROMLOCAL, true, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
         } catch (ObsException e) {
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.COPYFROMLOCAL, false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.COPYFROMLOCAL, false, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
-            throw OBSCommonUtils.translateException(
-                "copyFromLocalFile(" + src + ", " + dst + ")", src, e);
+            throw OBSCommonUtils.translateException("copyFromLocalFile(" + src + ", " + dst + ")", src, e);
         }
     }
@@ -1972,11 +1707,9 @@ public void close() throws IOException {
         closed = true;
         long endTime = System.currentTimeMillis();
         if (getMetricSwitch()) {
-            BasicMetricsConsumer.MetricRecord record =
-                new BasicMetricsConsumer.MetricRecord(
-                    BasicMetricsConsumer.MetricRecord.FS,
-                    BasicMetricsConsumer.MetricRecord.CLOSE, true,
-                    endTime - startTime);
+            BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                BasicMetricsConsumer.MetricRecord.FS, BasicMetricsConsumer.MetricRecord.CLOSE, true,
+                endTime - startTime);
             OBSCommonUtils.setMetricsInfo(this, record);
         }
@@ -1987,12 +1720,8 @@ public void close() throws IOException {
             }
             obs.close();
         } finally {
-            OBSCommonUtils.shutdownAll(
-                boundedMultipartUploadThreadPool,
-                boundedCopyThreadPool,
-                boundedDeleteThreadPool,
-                boundedCopyPartThreadPool,
-                boundedListThreadPool);
+            OBSCommonUtils.shutdownAll(boundedMultipartUploadThreadPool, boundedCopyThreadPool, boundedDeleteThreadPool,
+                boundedCopyPartThreadPool, boundedListThreadPool);
         }
         LOG.info("Finish closing filesystem instance for uri: {}", uri);
@@ -2074,8 +1803,7 @@ public String toString() {
         sb.append("uri=").append(uri);
         sb.append(", workingDir=").append(workingDir);
         sb.append(", partSize=").append(partSize);
-        sb.append(", enableMultiObjectsDelete=")
-            .append(enableMultiObjectDelete);
+        sb.append(", enableMultiObjectsDelete=").append(enableMultiObjectDelete);
         sb.append(", maxKeys=").append(maxKeys);
         if (cannedACL != null) {
             sb.append(", cannedACL=").append(cannedACL.toString());
@@ -2085,8 +1813,7 @@ public String toString() {
         if (blockFactory != null) {
             sb.append(", blockFactory=").append(blockFactory);
         }
-        sb.append(", boundedMultipartUploadThreadPool=")
-            .append(boundedMultipartUploadThreadPool);
+        sb.append(", boundedMultipartUploadThreadPool=").append(boundedMultipartUploadThreadPool);
         sb.append(", statistics {").append(statistics).append("}");
         sb.append(", metrics {").append("}");
         sb.append('}');
@@ -2123,8 +1850,7 @@ int getMaxKeys() {
      * @throws IOException if any I/O error occurred
      */
     @Override
-    public RemoteIterator<LocatedFileStatus> listFiles(final Path f,
-        final boolean recursive)
+    public RemoteIterator<LocatedFileStatus> listFiles(final Path f, final boolean recursive)
         throws FileNotFoundException, IOException {
         checkOpen();
         long startTime = System.currentTimeMillis();
@@ -2137,61 +1863,45 @@ public RemoteIterator<LocatedFileStatus> listFiles(final Path f,
             // lookup dir triggers existence check
             final FileStatus fileStatus;
             try {
-                fileStatus = OBSCommonUtils.innerGetFileStatusWithRetry(this,
-                    path);
+                fileStatus = OBSCommonUtils.innerGetFileStatusWithRetry(this, path);
             } catch (FileConflictException e) {
                 endTime = System.currentTimeMillis();
                 if (getMetricSwitch()) {
-                    BasicMetricsConsumer.MetricRecord record =
-                        new BasicMetricsConsumer.MetricRecord(null,
-                            BasicMetricsConsumer.MetricRecord.LIST_FILES, false,
-                            endTime - startTime);
+                    BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                        BasicMetricsConsumer.MetricRecord.LIST_FILES, false, endTime - startTime);
                     OBSCommonUtils.setMetricsInfo(this, record);
                 }
                 throw new AccessControlException(e);
             }
             if (fileStatus.isFile()) {
-                locatedFileStatus = new OBSListing
-                    .SingleStatusRemoteIterator(
+                locatedFileStatus = new OBSListing.SingleStatusRemoteIterator(
                     OBSCommonUtils.toLocatedFileStatus(this, fileStatus));
                 endTime = System.currentTimeMillis();
                 if (getMetricSwitch()) {
-                    BasicMetricsConsumer.MetricRecord record =
-                        new BasicMetricsConsumer.MetricRecord(null,
-                            BasicMetricsConsumer.MetricRecord.LIST_FILES, true,
-                            endTime - startTime);
+                    BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                        BasicMetricsConsumer.MetricRecord.LIST_FILES, true, endTime - startTime);
                     OBSCommonUtils.setMetricsInfo(this, record);
                 }
                 // simple case: File
                 LOG.debug("Path is a file");
                 return locatedFileStatus;
             } else {
-                LOG.debug(
-                    "listFiles: doing listFiles of directory {} - recursive {}",
-                    path, recursive);
+                LOG.debug("listFiles: doing listFiles of directory {} - recursive {}", path, recursive);
                 // directory: do a bulk operation
-                String key = OBSCommonUtils.maybeAddTrailingSlash(
-                    OBSCommonUtils.pathToKey(this, path));
+                String key = OBSCommonUtils.maybeAddTrailingSlash(OBSCommonUtils.pathToKey(this, path));
                 String delimiter = recursive ? null : "/";
-                LOG.debug("Requesting all entries under {} with delimiter '{}'",
-                    key, delimiter);
-                locatedFileStatus =
-                    obsListing.createLocatedFileStatusIterator(
-                        obsListing.createFileStatusListingIterator(
-                            path,
-                            OBSCommonUtils.createListObjectsRequest(this, key,
-                                delimiter),
-                            org.apache.hadoop.fs.obs.OBSListing.ACCEPT_ALL,
-                            new OBSListing.AcceptFilesOnly(path)));
+                LOG.debug("Requesting all entries under {} with delimiter '{}'", key, delimiter);
+                locatedFileStatus = obsListing.createLocatedFileStatusIterator(
+                    obsListing.createFileStatusListingIterator(path,
+                        OBSCommonUtils.createListObjectsRequest(this, key, delimiter),
+                        org.apache.hadoop.fs.obs.OBSListing.ACCEPT_ALL, new OBSListing.AcceptFilesOnly(path)));
                 endTime = System.currentTimeMillis();
                 if (getMetricSwitch()) {
-                    BasicMetricsConsumer.MetricRecord record =
-                        new BasicMetricsConsumer.MetricRecord(null,
-                            BasicMetricsConsumer.MetricRecord.LIST_FILES, true,
-                            endTime - startTime);
+                    BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                        BasicMetricsConsumer.MetricRecord.LIST_FILES, true, endTime - startTime);
                     OBSCommonUtils.setMetricsInfo(this, record);
                 }
                 return locatedFileStatus;
@@ -2199,10 +1909,8 @@ public RemoteIterator<LocatedFileStatus> listFiles(final Path f,
         } catch (ObsException e) {
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.LIST_FILES, false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.LIST_FILES, false, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
             throw OBSCommonUtils.translateException("listFiles", path, e);
@@ -2223,11 +1931,9 @@ public RemoteIterator<LocatedFileStatus> listFiles(final Path f,
      * @throws IOException If an I/O error occurred
      */
     @Override
-    public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f)
-        throws FileNotFoundException, IOException {
+    public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f) throws FileNotFoundException, IOException {
         checkOpen();
-        return listLocatedStatus(f,
-            org.apache.hadoop.fs.obs.OBSListing.ACCEPT_ALL);
+        return listLocatedStatus(f, org.apache.hadoop.fs.obs.OBSListing.ACCEPT_ALL);
     }

     /**
@@ -2242,8 +1948,7 @@ public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f)
      * @throws IOException if any I/O error occurred
      */
     @Override
-    public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f,
-        final PathFilter filter)
+    public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f, final PathFilter filter)
         throws FileNotFoundException, IOException {
         checkOpen();
         Path path = OBSCommonUtils.qualify(this, f);
@@ -2255,16 +1960,12 @@ public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f,
             // lookup dir triggers existence check
             final FileStatus fileStatus;
             try {
-                fileStatus = OBSCommonUtils.innerGetFileStatusWithRetry(this,
-                    path);
+                fileStatus = OBSCommonUtils.innerGetFileStatusWithRetry(this, path);
             } catch (FileConflictException e) {
                 endTime = System.currentTimeMillis();
                 if (getMetricSwitch()) {
-                    BasicMetricsConsumer.MetricRecord record =
-                        new BasicMetricsConsumer.MetricRecord(null,
-                            BasicMetricsConsumer.MetricRecord.LIST_LOCATED_STS,
-                            false,
-                            endTime - startTime);
+                    BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                        BasicMetricsConsumer.MetricRecord.LIST_LOCATED_STS, false, endTime - startTime);
                     OBSCommonUtils.setMetricsInfo(this, record);
                 }
                 throw new AccessControlException(e);
@@ -2273,38 +1974,26 @@ public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f,
             if (fileStatus.isFile()) {
                 // simple case: File
                 LOG.debug("Path is a file");
-                locatedFileStatusRemoteList =
-                    new OBSListing.SingleStatusRemoteIterator(
-                        filter.accept(path)
-                            ? OBSCommonUtils.toLocatedFileStatus(this,
-                            fileStatus) : null);
+                locatedFileStatusRemoteList = new OBSListing.SingleStatusRemoteIterator(
+                    filter.accept(path) ? OBSCommonUtils.toLocatedFileStatus(this, fileStatus) : null);
                 endTime = System.currentTimeMillis();
                 if (getMetricSwitch()) {
-                    BasicMetricsConsumer.MetricRecord record =
-                        new BasicMetricsConsumer.MetricRecord(null,
-                            BasicMetricsConsumer.MetricRecord.LIST_LOCATED_STS,
-                            true,
-                            endTime - startTime);
+                    BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                        BasicMetricsConsumer.MetricRecord.LIST_LOCATED_STS, true, endTime - startTime);
                     OBSCommonUtils.setMetricsInfo(this, record);
                 }
                 return locatedFileStatusRemoteList;
             } else {
                 // directory: trigger a lookup
-                String key = OBSCommonUtils.maybeAddTrailingSlash(
-                    OBSCommonUtils.pathToKey(this, path));
-                locatedFileStatusRemoteList =
-                    obsListing.createLocatedFileStatusIterator(
-                        obsListing.createFileStatusListingIterator(path,
-                            OBSCommonUtils.createListObjectsRequest(
-                                this, key, "/"), filter,
-                            new OBSListing.AcceptAllButSelfAndS3nDirs(path)));
+                String key = OBSCommonUtils.maybeAddTrailingSlash(OBSCommonUtils.pathToKey(this, path));
+                locatedFileStatusRemoteList = obsListing.createLocatedFileStatusIterator(
+                    obsListing.createFileStatusListingIterator(path,
+                        OBSCommonUtils.createListObjectsRequest(this, key, "/"), filter,
+                        new OBSListing.AcceptAllButSelfAndS3nDirs(path)));
                 endTime = System.currentTimeMillis();
                 if (getMetricSwitch()) {
-                    BasicMetricsConsumer.MetricRecord record =
-                        new BasicMetricsConsumer.MetricRecord(null,
-                            BasicMetricsConsumer.MetricRecord.LIST_LOCATED_STS,
-                            true,
-                            endTime - startTime);
+                    BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                        BasicMetricsConsumer.MetricRecord.LIST_LOCATED_STS, true, endTime - startTime);
                     OBSCommonUtils.setMetricsInfo(this, record);
                 }
                 return locatedFileStatusRemoteList;
@@ -2312,15 +2001,11 @@ public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f,
         } catch (ObsException e) {
             endTime = System.currentTimeMillis();
             if (getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(null,
-                        BasicMetricsConsumer.MetricRecord.LIST_LOCATED_STS,
-                        false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                    BasicMetricsConsumer.MetricRecord.LIST_LOCATED_STS, false, endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(this, record);
             }
-            throw OBSCommonUtils.translateException("listLocatedStatus", path,
-                e);
+            throw OBSCommonUtils.translateException("listLocatedStatus", path, e);
         }
     }
@@ -2329,7 +2014,7 @@ public RemoteIterator<LocatedFileStatus> listLocatedStatus(final Path f,
      *
      * @return the server-side encryption wrapper
      */
-    SseWrapper getSse() {
+    public SseWrapper getSse() {
         return sse;
     }
@@ -2338,7 +2023,7 @@ void setBucketPolicy(String policy) {
         obs.setBucketPolicy(bucket, policy);
     }

-    void checkOpen() throws IOException {
+    public void checkOpen() throws IOException {
         if (closed) {
             throw new IOException("OBSFilesystem closed");
         }
@@ -2348,7 +2033,7 @@ BasicMetricsConsumer getMetricsConsumer() {
         return metricsConsumer;
     }

-    boolean getMetricSwitch() {
+    public boolean getMetricSwitch() {
         return metricSwitch;
     }
diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFsDFSListing.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFsDFSListing.java
index 36b7ef2..39abc1b 100644
--- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFsDFSListing.java
+++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSFsDFSListing.java
@@ -25,12 +25,9 @@ class OBSFsDFSListing extends ObjectListing {
     /**
      * Class logger.
      */
-    private static final Logger LOG = LoggerFactory.getLogger(
-        OBSFsDFSListing.class);
+    private static final Logger LOG = LoggerFactory.getLogger(OBSFsDFSListing.class);

-    static void increaseLevelStats(final List<LevelStats> levelStatsList,
-        final int level,
-        final boolean isDir) {
+    static void increaseLevelStats(final List<LevelStats> levelStatsList, final int level, final boolean isDir) {
         int currMaxLevel = levelStatsList.size() - 1;
         if (currMaxLevel < level) {
             for (int i = 0; i < level - currMaxLevel; i++) {
@@ -45,170 +42,124 @@ static void increaseLevelStats(final List<LevelStats> levelStatsList,
         }
     }

-    static String fsDFSListNextBatch(final OBSFileSystem owner,
-        final Stack<ListEntity> listStack,
-        final Queue<ListEntity> resultQueue,
-        final String marker,
-        final int maxKeyNum,
-        final List<ObsObject> objectSummaries,
-        final List<LevelStats> levelStatsList) throws IOException {
+    static String fsDFSListNextBatch(final OBSFileSystem owner, final Stack<ListEntity> listStack,
+        final Queue<ListEntity> resultQueue, final String marker, final int maxKeyNum,
+        final List<ObsObject> objectSummaries, final List<LevelStats> levelStatsList) throws IOException {
         // 0. check if marker matches with the peek of result queue when marker
         // is given
         if (marker != null) {
             if (resultQueue.isEmpty()) {
+                throw new IllegalArgumentException("result queue is empty, but marker is not empty: " + marker);
+            } else if (resultQueue.peek().getType() == ListEntityType.LIST_TAIL) {
+                throw new RuntimeException("cannot put list tail (" + resultQueue.peek() + ") into result queue");
+            } else if (!marker.equals(resultQueue.peek().getType() == ListEntityType.COMMON_PREFIX
+                ? resultQueue.peek().getCommonPrefix()
+                : resultQueue.peek().getObjectSummary().getObjectKey())) {
                 throw new IllegalArgumentException(
-                    "result queue is empty, but marker is not empty: "
-                        + marker);
-            } else if (resultQueue.peek().getType()
-                == ListEntityType.LIST_TAIL) {
-                throw new RuntimeException(
-                    "cannot put list tail (" + resultQueue.peek()
-                        + ") into result queue");
-            } else if (!marker.equals(
-                resultQueue.peek().getType() == ListEntityType.COMMON_PREFIX
-                    ? resultQueue.peek().getCommonPrefix()
-                    : resultQueue.peek().getObjectSummary().getObjectKey())) {
-                throw new IllegalArgumentException("marker (" + marker
-                    + ") does not match with result queue peek ("
-                    + resultQueue.peek() + ")");
+                    "marker (" + marker + ") does not match with result queue peek (" + resultQueue.peek() + ")");
             }
         }

         // 1. fetch some list results from local result queue
-        int resultNum = fetchListResultLocally(resultQueue,
-            maxKeyNum, objectSummaries,
-            levelStatsList);
+        int resultNum = fetchListResultLocally(resultQueue, maxKeyNum, objectSummaries, levelStatsList);

         // 2. fetch more list results by doing one-level lists in parallel
-        fetchListResultRemotely(owner, listStack, resultQueue, maxKeyNum,
-            objectSummaries, levelStatsList, resultNum);
+        fetchListResultRemotely(owner, listStack, resultQueue, maxKeyNum, objectSummaries, levelStatsList, resultNum);

         // 3. check if list operation ends
         if (!listStack.empty() && resultQueue.isEmpty()) {
-            throw new RuntimeException(
-                "result queue is empty, but list stack is not empty: "
-                    + listStack);
+            throw new RuntimeException("result queue is empty, but list stack is not empty: " + listStack);
         }

         String nextMarker = null;
         if (!resultQueue.isEmpty()) {
             if (resultQueue.peek().getType() == ListEntityType.LIST_TAIL) {
-                throw new RuntimeException(
-                    "cannot put list tail (" + resultQueue.peek()
-                        + ") into result queue");
+                throw new RuntimeException("cannot put list tail (" + resultQueue.peek() + ") into result queue");
             } else {
-                nextMarker =
-                    resultQueue.peek().getType() == ListEntityType.COMMON_PREFIX
-                        ? resultQueue
-                        .peek().getCommonPrefix()
-                        : resultQueue.peek().getObjectSummary().getObjectKey();
+                nextMarker = resultQueue.peek().getType() == ListEntityType.COMMON_PREFIX ? resultQueue.peek()
+                    .getCommonPrefix() : resultQueue.peek().getObjectSummary().getObjectKey();
             }
         }
         return nextMarker;
     }

-    static void fetchListResultRemotely(final OBSFileSystem owner,
-        final Stack<ListEntity> listStack,
-        final Queue<ListEntity> resultQueue, final int maxKeyNum,
-        final List<ObsObject> objectSummaries,
-        final List<LevelStats> levelStatsList,
-        final int resultNum) throws IOException {
+    static void fetchListResultRemotely(final OBSFileSystem owner, final Stack<ListEntity> listStack,
+        final Queue<ListEntity> resultQueue, final int maxKeyNum, final List<ObsObject> objectSummaries,
+        final List<LevelStats> levelStatsList, final int resultNum) throws IOException {
         int newResultNum = resultNum;
-        while (!listStack.empty() && (newResultNum < maxKeyNum
-            || resultQueue.isEmpty())) {
+        while (!listStack.empty() && (newResultNum < maxKeyNum || resultQueue.isEmpty())) {
             List<ListObjectsRequest> oneLevelListRequests = new ArrayList<>();
             List<Future<ObjectListing>> oneLevelListFutures = new ArrayList<>();
             List<Integer> levels = new ArrayList<>();
             List<ObjectListing> oneLevelObjectListings = new ArrayList<>();
             // a. submit some one-level list tasks in parallel
-            submitOneLevelListTasks(owner, listStack, maxKeyNum,
-                oneLevelListRequests, oneLevelListFutures, levels);
+            submitOneLevelListTasks(owner, listStack, maxKeyNum, oneLevelListRequests, oneLevelListFutures, levels);

             // b. wait these tasks to complete
-            waitForOneLevelListTasksFinished(oneLevelListRequests,
-                oneLevelListFutures, oneLevelObjectListings);
+            waitForOneLevelListTasksFinished(oneLevelListRequests, oneLevelListFutures, oneLevelObjectListings);

             // c. put subdir/file into result commonPrefixes and
             // objectSummaries;if the number of results reaches maxKeyNum,
             // cache it into resultQueue for next list batch note: unlike
             // standard DFS, we put subdir directly into result list to avoid
             // caching it using more space
-            newResultNum = handleOneLevelListTaskResult(resultQueue, maxKeyNum,
-                objectSummaries, levelStatsList, newResultNum,
-                oneLevelListRequests, levels, oneLevelObjectListings);
+            newResultNum = handleOneLevelListTaskResult(resultQueue, maxKeyNum, objectSummaries, levelStatsList,
+                newResultNum, oneLevelListRequests, levels, oneLevelObjectListings);

             // d. push subdirs and list continuing tail/end into list stack in
             // reversed order,so that we can pop them from the stack in order
             // later
-            addNewListStackEntities(listStack, oneLevelListRequests, levels,
-                oneLevelObjectListings);
+            addNewListStackEntities(listStack, oneLevelListRequests, levels, oneLevelObjectListings);
         }
     }

     @SuppressWarnings("checkstyle:ParameterNumber")
-    static int handleOneLevelListTaskResult(final Queue<ListEntity> resultQueue,
-        final int maxKeyNum,
-        final List<ObsObject> objectSummaries,
-        final List<LevelStats> levelStatsList,
-        final int resultNum,
-        final List<ListObjectsRequest> oneLevelListRequests,
-        final List<Integer> levels,
+    static int handleOneLevelListTaskResult(final Queue<ListEntity> resultQueue, final int maxKeyNum,
+        final List<ObsObject> objectSummaries, final List<LevelStats> levelStatsList, final int resultNum,
+        final List<ListObjectsRequest> oneLevelListRequests, final List<Integer> levels,
         final List<ObjectListing> oneLevelObjectListings) {
         int newResultNum = resultNum;
         for (int i = 0; i < oneLevelObjectListings.size(); i++) {
-            LOG.debug(
-                "one level listing with prefix=" + oneLevelListRequests.get(i)
-                    .getPrefix()
-                    + ", marker=" + (
-                    oneLevelListRequests.get(i).getMarker() != null
-                        ? oneLevelListRequests.get(i)
-                        .getMarker()
-                        : ""));
+            LOG.debug("one level listing with prefix=" + oneLevelListRequests.get(i).getPrefix() + ", marker=" + (
+                oneLevelListRequests.get(i).getMarker() != null
+                    ? oneLevelListRequests.get(i).getMarker()
+                    : ""));

             ObjectListing oneLevelObjectListing = oneLevelObjectListings.get(i);
-            LOG.debug("# of CommonPrefixes/Objects: {}/{}",
-                oneLevelObjectListing.getCommonPrefixes().size(),
+            LOG.debug("# of CommonPrefixes/Objects: {}/{}", oneLevelObjectListing.getCommonPrefixes().size(),
                 oneLevelObjectListing.getObjects().size());

-            if (oneLevelObjectListing.getCommonPrefixes().isEmpty()
-                && oneLevelObjectListing.getObjects().isEmpty()) {
+            if (oneLevelObjectListing.getCommonPrefixes().isEmpty() && oneLevelObjectListing.getObjects().isEmpty()) {
                 continue;
             }

-            for (ObsObject extenedCommonPrefixes
-                : oneLevelObjectListing.getExtenedCommonPrefixes()) {
-                if (extenedCommonPrefixes.getObjectKey().equals(
-                    oneLevelListRequests.get(i).getPrefix())) {
+            for (ObsObject extenedCommonPrefixes : oneLevelObjectListing.getExtenedCommonPrefixes()) {
+                if (extenedCommonPrefixes.getObjectKey().equals(oneLevelListRequests.get(i).getPrefix())) {
                     // skip prefix itself
                     continue;
                 }
-                LOG.debug(
-                    "common prefix: " + extenedCommonPrefixes.getObjectKey());
+                LOG.debug("common prefix: " + extenedCommonPrefixes.getObjectKey());
                 extenedCommonPrefixes.getMetadata().setContentLength(0L);
                 if (newResultNum < maxKeyNum) {
                     objectSummaries.add(extenedCommonPrefixes);
                     increaseLevelStats(levelStatsList, levels.get(i), true);
                     newResultNum++;
                 } else {
-                    resultQueue.add(
-                        new ListEntity(extenedCommonPrefixes, levels.get(i)));
+                    resultQueue.add(new ListEntity(extenedCommonPrefixes, levels.get(i)));
                 }
             }

             for (ObsObject obj : oneLevelObjectListing.getObjects()) {
-                if (obj.getObjectKey()
-                    .equals(oneLevelListRequests.get(i).getPrefix())) {
+                if (obj.getObjectKey().equals(oneLevelListRequests.get(i).getPrefix())) {
                     // skip prefix itself
                     continue;
                 }
-                LOG.debug("object: {}, size: {}", obj.getObjectKey(),
-                    obj.getMetadata().getContentLength());
+                LOG.debug("object: {}, size: {}", obj.getObjectKey(), obj.getMetadata().getContentLength());
                 if (newResultNum < maxKeyNum) {
objectSummaries.add(obj); - increaseLevelStats(levelStatsList, levels.get(i), - obj.getObjectKey().endsWith("/")); + increaseLevelStats(levelStatsList, levels.get(i), obj.getObjectKey().endsWith("/")); newResultNum++; } else { resultQueue.add(new ListEntity(obj, levels.get(i))); @@ -218,118 +169,92 @@ static int handleOneLevelListTaskResult(final Queue resultQueue, return newResultNum; } - static void waitForOneLevelListTasksFinished( - final List oneLevelListRequests, - final List> oneLevelListFutures, - final List oneLevelObjectListings) + static void waitForOneLevelListTasksFinished(final List oneLevelListRequests, + final List> oneLevelListFutures, final List oneLevelObjectListings) throws IOException { for (int i = 0; i < oneLevelListFutures.size(); i++) { try { oneLevelObjectListings.add(oneLevelListFutures.get(i).get()); } catch (InterruptedException e) { - LOG.warn("Interrupted while listing using DFS, prefix=" - + oneLevelListRequests.get(i).getPrefix() + ", marker=" - + (oneLevelListRequests.get(i).getMarker() != null - ? oneLevelListRequests.get(i).getMarker() - : "")); + LOG.warn("Interrupted while listing using DFS, prefix=" + oneLevelListRequests.get(i).getPrefix() + + ", marker=" + (oneLevelListRequests.get(i).getMarker() != null ? oneLevelListRequests.get(i) + .getMarker() : "")); throw new InterruptedIOException( - "Interrupted while listing using DFS, prefix=" - + oneLevelListRequests.get(i).getPrefix() + ", marker=" - + (oneLevelListRequests.get(i).getMarker() != null - ? oneLevelListRequests.get(i).getMarker() - : "")); + "Interrupted while listing using DFS, prefix=" + oneLevelListRequests.get(i).getPrefix() + + ", marker=" + (oneLevelListRequests.get(i).getMarker() != null ? 
oneLevelListRequests.get(i) + .getMarker() : "")); } catch (ExecutionException e) { - LOG.error("Exception while listing using DFS, prefix=" - + oneLevelListRequests.get(i).getPrefix() + ", marker=" + LOG.error( + "Exception while listing using DFS, prefix=" + oneLevelListRequests.get(i).getPrefix() + ", marker=" + (oneLevelListRequests.get(i).getMarker() != null ? oneLevelListRequests.get(i).getMarker() - : ""), - e); + : ""), e); for (Future future : oneLevelListFutures) { future.cancel(true); } - throw OBSCommonUtils.extractException( - "Listing using DFS with exception, marker=" - + (oneLevelListRequests.get(i).getMarker() != null + throw OBSCommonUtils.extractException("Listing using DFS with exception, marker=" + ( + oneLevelListRequests.get(i).getMarker() != null ? oneLevelListRequests.get(i).getMarker() - : ""), - oneLevelListRequests.get(i).getPrefix(), e); + : ""), oneLevelListRequests.get(i).getPrefix(), e); } } } - static void submitOneLevelListTasks(final OBSFileSystem owner, - final Stack listStack, final int maxKeyNum, - final List oneLevelListRequests, - final List> oneLevelListFutures, - final List levels) { - for (int i = 0; - i < owner.getListParallelFactor() && !listStack.empty(); i++) { + static void submitOneLevelListTasks(final OBSFileSystem owner, final Stack listStack, + final int maxKeyNum, final List oneLevelListRequests, + final List> oneLevelListFutures, final List levels) { + for (int i = 0; i < owner.getListParallelFactor() && !listStack.empty(); i++) { ListEntity listEntity = listStack.pop(); if (listEntity.getType() == ListEntityType.LIST_TAIL) { if (listEntity.getNextMarker() != null) { - ListObjectsRequest oneLevelListRequest - = new ListObjectsRequest(); + ListObjectsRequest oneLevelListRequest = new ListObjectsRequest(); oneLevelListRequest.setBucketName(owner.getBucket()); oneLevelListRequest.setPrefix(listEntity.getPrefix()); oneLevelListRequest.setMarker(listEntity.getNextMarker()); - oneLevelListRequest.setMaxKeys( - 
Math.min(maxKeyNum, owner.getMaxKeys())); + oneLevelListRequest.setMaxKeys(Math.min(maxKeyNum, owner.getMaxKeys())); oneLevelListRequest.setDelimiter("/"); oneLevelListRequests.add(oneLevelListRequest); oneLevelListFutures.add(owner.getBoundedListThreadPool() - .submit(() -> OBSCommonUtils.commonContinueListObjects( - owner, oneLevelListRequest))); + .submit(() -> OBSCommonUtils.commonContinueListObjects(owner, oneLevelListRequest))); levels.add(listEntity.getLevel()); } // avoid adding list tasks in different levels later break; } else { - String oneLevelListPrefix = - listEntity.getType() == ListEntityType.COMMON_PREFIX - ? listEntity.getCommonPrefix() - : listEntity.getObjectSummary().getObjectKey(); - ListObjectsRequest oneLevelListRequest = OBSCommonUtils - .createListObjectsRequest(owner, oneLevelListPrefix, "/", - maxKeyNum); + String oneLevelListPrefix = listEntity.getType() == ListEntityType.COMMON_PREFIX + ? listEntity.getCommonPrefix() + : listEntity.getObjectSummary().getObjectKey(); + ListObjectsRequest oneLevelListRequest = OBSCommonUtils.createListObjectsRequest(owner, + oneLevelListPrefix, "/", maxKeyNum); oneLevelListRequests.add(oneLevelListRequest); oneLevelListFutures.add(owner.getBoundedListThreadPool() - .submit(() -> OBSCommonUtils.commonListObjects(owner, - oneLevelListRequest))); + .submit(() -> OBSCommonUtils.commonListObjects(owner, oneLevelListRequest))); levels.add(listEntity.getLevel() + 1); } } } static void addNewListStackEntities(final Stack listStack, - final List oneLevelListRequests, - final List levels, + final List oneLevelListRequests, final List levels, final List oneLevelObjectListings) { for (int i = oneLevelObjectListings.size() - 1; i >= 0; i--) { ObjectListing oneLevelObjectListing = oneLevelObjectListings.get(i); - if (oneLevelObjectListing.getCommonPrefixes().isEmpty() - && oneLevelObjectListing.getObjects() - .isEmpty()) { + if (oneLevelObjectListing.getCommonPrefixes().isEmpty() && 
oneLevelObjectListing.getObjects().isEmpty()) { continue; } listStack.push(new ListEntity(oneLevelObjectListing.getPrefix(), - oneLevelObjectListing.isTruncated() - ? oneLevelObjectListing.getNextMarker() - : null, - levels.get(i))); + oneLevelObjectListing.isTruncated() ? oneLevelObjectListing.getNextMarker() : null, levels.get(i))); - ListIterator commonPrefixListIterator - = oneLevelObjectListing.getCommonPrefixes() + ListIterator commonPrefixListIterator = oneLevelObjectListing.getCommonPrefixes() .listIterator(oneLevelObjectListing.getCommonPrefixes().size()); while (commonPrefixListIterator.hasPrevious()) { String commonPrefix = commonPrefixListIterator.previous(); - if (commonPrefix.equals( - oneLevelListRequests.get(i).getPrefix())) { + if (commonPrefix.equals(oneLevelListRequests.get(i).getPrefix())) { // skip prefix itself continue; } @@ -337,40 +262,32 @@ static void addNewListStackEntities(final Stack listStack, listStack.push(new ListEntity(commonPrefix, levels.get(i))); } - ListIterator objectSummaryListIterator - = oneLevelObjectListing.getObjects() + ListIterator objectSummaryListIterator = oneLevelObjectListing.getObjects() .listIterator(oneLevelObjectListing.getObjects().size()); while (objectSummaryListIterator.hasPrevious()) { ObsObject objectSummary = objectSummaryListIterator.previous(); - if (objectSummary.getObjectKey() - .equals(oneLevelListRequests.get(i).getPrefix())) { + if (objectSummary.getObjectKey().equals(oneLevelListRequests.get(i).getPrefix())) { // skip prefix itself continue; } if (objectSummary.getObjectKey().endsWith("/")) { - listStack.push( - new ListEntity(objectSummary, levels.get(i))); + listStack.push(new ListEntity(objectSummary, levels.get(i))); } } } } - static int fetchListResultLocally(final Queue resultQueue, - final int maxKeyNum, - final List objectSummaries, - final List levelStatsList) { + static int fetchListResultLocally(final Queue resultQueue, final int maxKeyNum, + final List objectSummaries, final List 
levelStatsList) { int resultNum = 0; while (!resultQueue.isEmpty() && resultNum < maxKeyNum) { ListEntity listEntity = resultQueue.poll(); if (listEntity.getType() == ListEntityType.LIST_TAIL) { - throw new RuntimeException("cannot put list tail (" + listEntity - + ") into result queue"); + throw new RuntimeException("cannot put list tail (" + listEntity + ") into result queue"); } else if (listEntity.getType() == ListEntityType.COMMON_PREFIX) { - throw new RuntimeException( - "cannot put common prefix (" + listEntity - + ") into result queue"); + throw new RuntimeException("cannot put common prefix (" + listEntity + ") into result queue"); } else { objectSummaries.add(listEntity.getObjectSummary()); increaseLevelStats(levelStatsList, listEntity.getLevel(), @@ -381,20 +298,18 @@ static int fetchListResultLocally(final Queue resultQueue, return resultNum; } - static OBSFsDFSListing fsDFSListObjects(final OBSFileSystem owner, - final ListObjectsRequest request) throws IOException { + static OBSFsDFSListing fsDFSListObjects(final OBSFileSystem owner, final ListObjectsRequest request) + throws IOException { List objectSummaries = new ArrayList<>(); List commonPrefixes = new ArrayList<>(); String bucketName = owner.getBucket(); String prefix = request.getPrefix(); int maxKeyNum = request.getMaxKeys(); if (request.getDelimiter() != null) { - throw new IllegalArgumentException( - "illegal delimiter: " + request.getDelimiter()); + throw new IllegalArgumentException("illegal delimiter: " + request.getDelimiter()); } if (request.getMarker() != null) { - throw new IllegalArgumentException( - "illegal marker: " + request.getMarker()); + throw new IllegalArgumentException("illegal marker: " + request.getMarker()); } Stack listStack = new Stack<>(); @@ -404,14 +319,16 @@ static OBSFsDFSListing fsDFSListObjects(final OBSFileSystem owner, listStack.push(new ListEntity(prefix, 0)); increaseLevelStats(levelStatsList, 0, true); - String nextMarker = fsDFSListNextBatch(owner, 
listStack, resultQueue, - null, maxKeyNum, objectSummaries, + String nextMarker = fsDFSListNextBatch(owner, listStack, resultQueue, null, maxKeyNum, objectSummaries, levelStatsList); if (nextMarker == null) { StringBuilder levelStatsStringBuilder = new StringBuilder(); - levelStatsStringBuilder.append("bucketName=").append(bucketName) - .append(", prefix=").append(prefix).append(": "); + levelStatsStringBuilder.append("bucketName=") + .append(bucketName) + .append(", prefix=") + .append(prefix) + .append(": "); for (LevelStats levelStats : levelStatsList) { levelStatsStringBuilder.append("level=") .append(levelStats.getLevel()) @@ -421,21 +338,14 @@ static OBSFsDFSListing fsDFSListObjects(final OBSFileSystem owner, .append(levelStats.getFileNum()) .append("; "); } - LOG.debug("[list level statistics info] " - + levelStatsStringBuilder.toString()); + LOG.debug("[list level statistics info] " + levelStatsStringBuilder.toString()); } - return new OBSFsDFSListing(request, - objectSummaries, - commonPrefixes, - nextMarker, - listStack, - resultQueue, + return new OBSFsDFSListing(request, objectSummaries, commonPrefixes, nextMarker, listStack, resultQueue, levelStatsList); } - static OBSFsDFSListing fsDFSContinueListObjects(final OBSFileSystem owner, - final OBSFsDFSListing obsFsDFSListing) + static OBSFsDFSListing fsDFSContinueListObjects(final OBSFileSystem owner, final OBSFsDFSListing obsFsDFSListing) throws IOException { List objectSummaries = new ArrayList<>(); List commonPrefixes = new ArrayList<>(); @@ -444,22 +354,23 @@ static OBSFsDFSListing fsDFSContinueListObjects(final OBSFileSystem owner, String marker = obsFsDFSListing.getNextMarker(); int maxKeyNum = obsFsDFSListing.getMaxKeys(); if (obsFsDFSListing.getDelimiter() != null) { - throw new IllegalArgumentException( - "illegal delimiter: " + obsFsDFSListing.getDelimiter()); + throw new IllegalArgumentException("illegal delimiter: " + obsFsDFSListing.getDelimiter()); } Stack listStack = 
obsFsDFSListing.getListStack(); Queue resultQueue = obsFsDFSListing.getResultQueue(); List levelStatsList = obsFsDFSListing.getLevelStatsList(); - String nextMarker = fsDFSListNextBatch(owner, listStack, resultQueue, - marker, maxKeyNum, objectSummaries, + String nextMarker = fsDFSListNextBatch(owner, listStack, resultQueue, marker, maxKeyNum, objectSummaries, levelStatsList); if (nextMarker == null) { StringBuilder levelStatsStringBuilder = new StringBuilder(); - levelStatsStringBuilder.append("bucketName=").append(bucketName) - .append(", prefix=").append(prefix).append(": "); + levelStatsStringBuilder.append("bucketName=") + .append(bucketName) + .append(", prefix=") + .append(prefix) + .append(": "); for (LevelStats levelStats : levelStatsList) { levelStatsStringBuilder.append("level=") .append(levelStats.getLevel()) @@ -469,16 +380,10 @@ static OBSFsDFSListing fsDFSContinueListObjects(final OBSFileSystem owner, .append(levelStats.getFileNum()) .append("; "); } - LOG.debug("[list level statistics info] " - + levelStatsStringBuilder.toString()); + LOG.debug("[list level statistics info] " + levelStatsStringBuilder.toString()); } - return new OBSFsDFSListing(obsFsDFSListing, - objectSummaries, - commonPrefixes, - nextMarker, - listStack, - resultQueue, + return new OBSFsDFSListing(obsFsDFSListing, objectSummaries, commonPrefixes, nextMarker, listStack, resultQueue, levelStatsList); } @@ -546,8 +451,7 @@ static class ListEntity { this.level = entityLevel; } - ListEntity(final String pf, final String nextMk, - final int entityLevel) { + ListEntity(final String pf, final String nextMk, final int entityLevel) { this.type = ListEntityType.LIST_TAIL; this.prefix = pf; this.nextMarker = nextMk; @@ -580,15 +484,10 @@ String getNextMarker() { @Override public String toString() { - return "type: " + type - + ", commonPrefix: " + (commonPrefix != null - ? commonPrefix - : "") - + ", objectSummary: " + (objectSummary != null - ? 
objectSummary - : "") - + ", prefix: " + (prefix != null ? prefix : "") - + ", nextMarker: " + (nextMarker != null ? nextMarker : ""); + return "type: " + type + ", commonPrefix: " + (commonPrefix != null ? commonPrefix : "") + + ", objectSummary: " + (objectSummary != null ? objectSummary : "") + ", prefix: " + (prefix != null + ? prefix + : "") + ", nextMarker: " + (nextMarker != null ? nextMarker : ""); } } @@ -653,45 +552,22 @@ long getFileNum() { */ private List levelStatsList; - OBSFsDFSListing(final ListObjectsRequest request, - final List objectSummaries, - final List commonPrefixes, - final String nextMarker, - final Stack listEntityStack, - final Queue listEntityQueue, - final List listLevelStats) { - super(objectSummaries, - commonPrefixes, - request.getBucketName(), - nextMarker != null, - request.getPrefix(), - null, - request.getMaxKeys(), - null, - nextMarker, - null); + OBSFsDFSListing(final ListObjectsRequest request, final List objectSummaries, + final List commonPrefixes, final String nextMarker, final Stack listEntityStack, + final Queue listEntityQueue, final List listLevelStats) { + super(objectSummaries, commonPrefixes, request.getBucketName(), nextMarker != null, request.getPrefix(), null, + request.getMaxKeys(), null, nextMarker, null); this.listStack = listEntityStack; this.resultQueue = listEntityQueue; this.levelStatsList = listLevelStats; } - OBSFsDFSListing(final OBSFsDFSListing obsFsDFSListing, - final List objectSummaries, - final List commonPrefixes, - final String nextMarker, - final Stack listEntityStack, - final Queue listEntityQueue, - final List listLevelStats) { - super(objectSummaries, - commonPrefixes, - obsFsDFSListing.getBucketName(), - nextMarker != null, - obsFsDFSListing.getPrefix(), - obsFsDFSListing.getNextMarker(), - obsFsDFSListing.getMaxKeys(), - null, - nextMarker, - null); + OBSFsDFSListing(final OBSFsDFSListing obsFsDFSListing, final List objectSummaries, + final List commonPrefixes, final String nextMarker, 
final Stack listEntityStack, + final Queue listEntityQueue, final List listLevelStats) { + super(objectSummaries, commonPrefixes, obsFsDFSListing.getBucketName(), nextMarker != null, + obsFsDFSListing.getPrefix(), obsFsDFSListing.getNextMarker(), obsFsDFSListing.getMaxKeys(), null, + nextMarker, null); this.listStack = listEntityStack; this.resultQueue = listEntityQueue; this.levelStatsList = listLevelStats; diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSHDFSFileSystem.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSHDFSFileSystem.java new file mode 100644 index 0000000..069cd7f --- /dev/null +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSHDFSFileSystem.java @@ -0,0 +1,931 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.obs; + +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.BlockLocation; +import org.apache.hadoop.fs.BlockStoragePolicySpi; +import org.apache.hadoop.fs.ContentSummary; +import org.apache.hadoop.fs.CreateFlag; +import org.apache.hadoop.fs.FSDataInputStream; +import org.apache.hadoop.fs.FSDataOutputStream; +import org.apache.hadoop.fs.FileChecksum; +import org.apache.hadoop.fs.FileStatus; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.FsStatus; +import org.apache.hadoop.fs.LocatedFileStatus; +import org.apache.hadoop.fs.Options; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.PathFilter; +import org.apache.hadoop.fs.QuotaUsage; +import org.apache.hadoop.fs.RemoteIterator; +import org.apache.hadoop.fs.StorageType; +import org.apache.hadoop.fs.XAttrSetFlag; +import org.apache.hadoop.fs.permission.AclEntry; +import org.apache.hadoop.fs.permission.AclStatus; +import org.apache.hadoop.fs.permission.FsAction; +import org.apache.hadoop.fs.permission.FsPermission; +import org.apache.hadoop.hdfs.DistributedFileSystem; +import org.apache.hadoop.hdfs.client.HdfsDataOutputStream; +import org.apache.hadoop.util.Progressable; + +import java.io.FileNotFoundException; +import java.io.IOException; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Method; +import java.net.InetSocketAddress; +import java.net.URI; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.Comparator; +import java.util.EnumSet; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.Optional; +import java.util.function.Function; +import java.util.function.Supplier; +import java.util.stream.Collectors; + +public class OBSHDFSFileSystem extends DistributedFileSystem { + public static 
final Log LOG = LogFactory.getLog(OBSHDFSFileSystem.class); + + private final static String CONFIG_HDFS_PREFIX = "fs.hdfs.mounttable"; + + private final static String CONFIG_HDFS_DEFAULT_MOUNT_TABLE = "default"; + + private final static String CONFIG_HDFS_LINK = "link"; + + public static final String WRAPPERFS_RESERVED = "/.wrapperfs_reserved"; + + private Configuration wrapperConf; + + private Map<String, MountInfo> mountMap; // key is mount point, should not end with "/" + + private DistributedFileSystem underHDFS; + + @Override + public void setWorkingDirectory(Path path) { + TransferedPath newPath = transferToNewPath(path); + + try { + newPath.getFS().setWorkingDirectory(newPath.toPath()); + } catch (IOException e) { + LOG.error("failed to set working directory"); + } + } + + private List<Pair<String, String>> getMountList(final Configuration config, final String viewName) { + List<Pair<String, String>> mountList = new ArrayList<>(); + String mountTableName = viewName; + if (mountTableName == null) { + mountTableName = CONFIG_HDFS_DEFAULT_MOUNT_TABLE; + } + final String mountTablePrefix = CONFIG_HDFS_PREFIX + "."
+ mountTableName + "."; + final String linkPrefix = CONFIG_HDFS_LINK + "."; + for (Map.Entry<String, String> si : config) { + final String key = si.getKey(); + if (key.startsWith(mountTablePrefix)) { + String src = key.substring(mountTablePrefix.length()); + if (src.startsWith(linkPrefix)) { + String pathPrefix = src.substring(linkPrefix.length()).trim(); + if (pathPrefix.endsWith(Path.SEPARATOR) && !pathPrefix.trim().equals(Path.SEPARATOR)) { + pathPrefix = pathPrefix.substring(0, pathPrefix.length() - 1); + } + mountList.add(new Pair<>(pathPrefix, si.getValue())); + } + } + } + return mountList; + } + + private List<Pair<String, String>> initMountList(List<Pair<String, String>> mountList) { + List<Pair<String, String>> mappingPaths = new ArrayList<>(); + + // sort shorter mount points first; this helps detect subdirectory mounts + Collections.sort(mountList, Comparator.comparingInt(p -> p.getKey().length())); + + for (int i = 0; i < mountList.size(); i++) { + String mountTarget = mountList.get(i).getValue(); + String mountPoint = mountList.get(i).getKey(); + URI.create(mountTarget); // just check path format + + boolean conflict = mappingPaths.stream().map(Pair::getKey) + // if a mount point is the same as a previous one, ignore the current one + // if a mount point is a subdirectory of another one, ignore the subdirectory + .filter(mp -> mountPoint.equals(mp) || mountPoint.startsWith(mp) && mountPoint.substring(mp.length()).startsWith(Path.SEPARATOR)) + .map(mp -> { + LOG.warn("mount point: " + mountPoint + " is ignored by shorter mount point: " + mp); + return mp; + }) // log only + .findFirst().isPresent(); + if (!conflict) { + mappingPaths.add(mountList.get(i)); + LOG.info(mountList.get(i).getKey() + "->" + mountList.get(i).getValue()); + } + } + return mappingPaths; + } + + // This method is invoked via reflection; when maintaining it, take care not to break its behavior. + private TransferedPath transferToNewPath(Path path) { + String inputPath = path.toUri().getPath(); + + if (inputPath.startsWith(WRAPPERFS_RESERVED)) { + // reserved path rule: /.wrapperfs_reserved/<schema>/<authority>/path + String[] pathComps = inputPath.split("/", 5); + if
(pathComps.length >= 4) { + String schema = pathComps[2]; + String authority = pathComps[3]; + if ("null".equals(authority)) { + authority = null; + } + String reservedPath = "/" + pathComps[4]; + + MountInfo mi = findMountWithSchemaAndAuthority(schema, authority); + if (mi != null) { + return new TransferedPath(mi, reservedPath, true); + } else { + MountInfo reservedMountInfo = new MountInfo( + String.format("%s/%s/%s/", WRAPPERFS_RESERVED, schema, authority), "/", () -> underHDFS); + return new TransferedPath(reservedMountInfo, reservedPath, true); + } + } + } + for (Map.Entry<String, MountInfo> entry : mountMap.entrySet()) { + if (inputPath.startsWith(entry.getKey())) { + String subPath = inputPath.substring(entry.getKey().length()); + // exact match of the mount key, or followed by another path component + if (subPath.length() == 0 || subPath.startsWith("/")) { + return new TransferedPath(mountMap.get(entry.getKey()), subPath); + } + } + } + MountInfo originMount = new MountInfo("/", "/", () -> underHDFS); + return new TransferedPath(originMount, inputPath, false); + } + + private MountInfo findMountWithSchemaAndAuthority(String schema, String authority) { + for (MountInfo m : mountMap.values()) { + try { + if (Objects.equals(m.getToFileSystem().getUri().getScheme(), schema) && Objects.equals( + m.getToFileSystem().getUri().getAuthority(), authority)) { + return m; + } + } catch (IOException e) { + LOG.warn("failed to get the mount target filesystem; skipping this mount", e); + } + } + return null; + } + + /** + * Convert an in-mount path back to a path on the wrapped fs. + * + * @param inMountPath the path under the mount point + * @param mountPointFS the filesystem the mount point resolves to + * @return the path present in default fs + */ + private Path toWrappedPath(Path inMountPath, FileSystem mountPointFS) throws IOException { + URI inMountURI = inMountPath.toUri(); + String schema = Optional.ofNullable(inMountURI.getScheme()).orElse(mountPointFS.getScheme()); + String authority = inMountURI.getAuthority(); + String path =
inMountURI.getPath(); + + for (MountInfo m : mountMap.values()) { + if (inSameFileSystem(mountPointFS, schema, authority, m) && path.startsWith(m.toPath)) { + // this inMountPath is sub path under the mount point + String subFolder = path.substring(m.toPath.length()); + return this.makeQualified(new Path(new Path(m.fromPath), new Path(subFolder))); + } + } + + // use reserved path for non-mount path under some mount file system + return Path.mergePaths(getReservedPath(schema, authority), new Path(path)); + } + + private Path getReservedPath(String schema, String authority) { + return new Path(String.format(WRAPPERFS_RESERVED + "/%s/%s/", schema, authority)); + } + + private boolean inSameFileSystem(FileSystem mountPointFS, String schema, String authority, MountInfo m) + throws IOException { + return m.getToFileSystem() == mountPointFS + || Objects.equals(m.getToFileSystem().getUri().getScheme(), schema) && Objects.equals( + m.getToFileSystem().getUri().getAuthority(), authority); + } + + private Path transferToWrappedPath(Path path, TransferedPath newPath) { + String currentPath = path.toUri().getPath(); + String fromPrefix = newPath.getMountInfo().getToPath(); // reverse path convert + String targetPrefix = newPath.isReservedPath ? 
"/" : newPath.getMountInfo().getFromPath(); + + String remainPart; + if (currentPath.startsWith(fromPrefix)) { + remainPart = currentPath.substring(fromPrefix.length()); + if (remainPart.startsWith(Path.SEPARATOR)) { + remainPart = remainPart.substring(1); + } + } else { + // in case the path is not in mount, transfer to wrapped path + try { + remainPart = toWrappedPath(new Path(currentPath), newPath.getFS()).toString(); + } catch (IOException e) { + LOG.warn("failed in toWrappedPath", e); + remainPart = currentPath; + } + } + + Path targetPrefixPath = new Path(getUri().getScheme(), getUri().getAuthority(), targetPrefix); + return new Path(targetPrefixPath, remainPart); + } + + public void initialize(URI theUri, Configuration conf) throws IOException { + this.wrapperConf = new Configuration(conf); + wrapperConf.set("fs.hdfs.impl", DistributedFileSystem.class.getName()); + + super.initialize(theUri, conf); + underHDFS = (DistributedFileSystem) FileSystem.newInstance(theUri, wrapperConf); + + final String authority = theUri.getAuthority(); + mountMap = new HashMap<>(); + for (Pair p : initMountList(getMountList(conf, authority))) { + String fromPath = new Path(p.getKey()).toString(); + Path toRawPath = new Path(p.getValue()); + String toPath = toRawPath.toUri().getPath(); + LOG.info("Initialize mount fs from " + fromPath + " to " + toRawPath); + mountMap.put(p.getKey(), new MountInfo(fromPath, toPath, () -> { + try { + return toRawPath.getFileSystem(wrapperConf); + } catch (IOException e) { + throw new UncheckException(e); + } + })); + } + } + + @Override + public BlockLocation[] getFileBlockLocations(FileStatus file, long start, long len) throws IOException { + TransferedPath newPath = transferToNewPath(file.getPath()); + return newPath.getFS().getFileBlockLocations(newPath.toPath(), start, len); + } + + @Override + public BlockLocation[] getFileBlockLocations(Path p, final long start, final long len) throws IOException { + TransferedPath newPath = 
transferToNewPath(p); + return newPath.getFS().getFileBlockLocations(newPath.toPath(), start, len); + } + + @Override + public boolean recoverLease(final Path path) throws IOException { + TransferedPath newPath = transferToNewPath(path); + if (newPath.getFS() == underHDFS) { + return underHDFS.recoverLease(path); + } + return true; + } + + @Override + public FSDataInputStream open(Path path, final int i) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return newPath.getFS().open(newPath.toPath(), i); + } + + @Override + public FSDataOutputStream append(Path f, final EnumSet flag, final int bufferSize, + final Progressable progress) throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(f); + return newPath.getFS().append(newPath.toPath(), bufferSize, progress); + } + + @Override + public FSDataOutputStream create(Path path, FsPermission fsPermission, boolean overwrite, int bufferSize, + short replication, long blockSize, Progressable progressable) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return newPath.getFS() + .create(newPath.toPath(), fsPermission, overwrite, bufferSize, replication, blockSize, progressable); + } + + @Override + public HdfsDataOutputStream create(final Path f, final FsPermission permission, final boolean overwrite, + final int bufferSize, final short replication, final long blockSize, final Progressable progress, + final InetSocketAddress[] favoredNodes) throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(f); + if (newPath.getFS() == underHDFS) { + return underHDFS.create(newPath.toPath(), permission, overwrite, bufferSize, replication, blockSize, + progress, favoredNodes); + } + throw new UnsupportedOperationException( + "Not implemented create(final Path f, final FsPermission permission, final boolean overwrite, final int bufferSize, final short replication, final long blockSize, final Progressable 
progress, final InetSocketAddress[] favoredNodes)!"); + } + + @Override + public FSDataOutputStream create(Path f, final FsPermission permission, final EnumSet cflags, + final int bufferSize, final short replication, final long blockSize, final Progressable progress, + final Options.ChecksumOpt checksumOpt) throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(f); + return newPath.getFS() + .create(newPath.toPath(), permission, cflags, bufferSize, replication, blockSize, progress, checksumOpt); + } + + @Override + public FSDataOutputStream createNonRecursive(Path path, final FsPermission permission, + final EnumSet flag, final int bufferSize, final short replication, final long blockSize, + final Progressable progress) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return newPath.getFS() + .createNonRecursive(newPath.toPath(), permission, flag, bufferSize, replication, blockSize, progress); + } + + @Override + public boolean setReplication(Path src, final short replication) throws IOException { + TransferedPath newPath = transferToNewPath(src); + return newPath.getFS().setReplication(newPath.toPath(), replication); + } + + @Override + public void setStoragePolicy(Path src, final String policyName) throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(src); + if (newPath.getFS() == underHDFS) { + underHDFS.setStoragePolicy(newPath.toPath(), policyName); + } else { + throw new UnsupportedOperationException( + "Not implemented setStoragePolicy(Path src, final String policyName)!"); + } + } + + @Override + public void unsetStoragePolicy(final Path src) throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(src); + if (newPath.getFS() == underHDFS) { + underHDFS.unsetStoragePolicy(newPath.toPath()); + } else { + throw new UnsupportedOperationException("Not implemented unsetStoragePolicy"); + } + } + + @Override + 
public BlockStoragePolicySpi getStoragePolicy(Path path) throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(path); + if (newPath.getFS() == underHDFS) { + return underHDFS.getStoragePolicy(newPath.toPath()); + } + throw new UnsupportedOperationException("Not implemented getStoragePolicy(Path path)!"); + } + + @Override + public void concat(Path trg, Path[] psrcs) throws IOException { + TransferedPath newPath = transferToNewPath(trg); + List<TransferedPath> newpsrcs = Arrays.stream(psrcs).map(this::transferToNewPath).collect(Collectors.toList()); + String targetAuthority = newPath.getFS().getUri().getAuthority(); + String targetSchema = newPath.getFS().getUri().getScheme(); + + for (TransferedPath newpsrc : newpsrcs) { + if (!newpsrc.getFS().getUri().getScheme().equals(targetSchema) || !newpsrc.getFS() + .getUri() + .getAuthority() + .equals(targetAuthority)) { + throw new UnsupportedOperationException( + "cannot concat files across filesystems, target filesystem: " + newPath.getFS().getUri() + + ", source: " + newpsrc.getFS().getUri()); + } + } + + Path[] concatPaths = newpsrcs.stream().map(TransferedPath::toPath).toArray(Path[]::new); + newPath.getFS().concat(newPath.toPath(), concatPaths); + } + + @Override + public boolean rename(Path src, Path dst) throws IOException { + TransferedPath newPath = transferToNewPath(src); + TransferedPath newPathDest = transferToNewPath(dst); + + // Objects.equals will handle the NULL authority case + if (!Objects.equals(newPath.getFS().getUri().getAuthority(), newPathDest.getFS().getUri().getAuthority())) { + throw new IOException(new UnsupportedOperationException( + "cannot rename across filesystems. srcfs: " + newPath.getFS().getUri() + ", dstfs: " + + newPathDest.getFS().getUri())); + } + + return newPath.getFS().rename(newPath.toPath(), newPathDest.toPath()); + } + + @Override + public void rename(Path src, Path dst, final Options.Rename...
options) + throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(src); + TransferedPath newPathDest = transferToNewPath(dst); + + if (!Objects.equals(newPath.getFS().getUri().getAuthority(), newPathDest.getFS().getUri().getAuthority())) { + throw new IOException(new UnsupportedOperationException("cannot rename across filesystems")); + } + + if (newPath.toPath().toString().equals(newPathDest.toPath().toString())) { + return; + } + + try { + Class<?> classType = newPath.getFS().getClass(); + Method method = classType.getDeclaredMethod("rename", Path.class, Path.class, Options.Rename[].class); + method.setAccessible(true); + method.invoke(newPath.getFS(), new Object[] {newPath.toPath(), newPathDest.toPath(), options}); + return; + } catch (NoSuchMethodException e) { + // the underlying FileSystem does not declare rename(Path, Path, Options.Rename...); fall back to the code below + LOG.warn("rename with options not declared by the underlying filesystem, falling back to FileSystem.rename", e); + } catch (IllegalAccessException | InvocationTargetException e) { + LOG.warn("reflective rename failed, ignoring this and falling back to FileSystem.rename.", e); + } + + if (options.length > 0 && options[0] == Options.Rename.OVERWRITE) { + newPath.getFS().delete(newPathDest.toPath(), false); + newPath.getFS().rename(newPath.toPath(), newPathDest.toPath()); + } else { + newPath.getFS().rename(newPath.toPath(), newPathDest.toPath()); + } + } + + @Override + public boolean truncate(Path f, final long newLength) throws IOException { + TransferedPath newPath = transferToNewPath(f); + return newPath.getFS().truncate(newPath.toPath(), newLength); + } + + @Override + public boolean delete(Path path, boolean recursive) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return newPath.getFS().delete(newPath.toPath(), recursive); + } + + @Override + public ContentSummary getContentSummary(Path f) throws IOException { + TransferedPath newPath = transferToNewPath(f); + return newPath.getFS().getContentSummary(newPath.toPath()); + } + +
@Override + public QuotaUsage getQuotaUsage(Path f) throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(f); + return newPath.getFS().getQuotaUsage(newPath.toPath()); + + } + + @Override + public void setQuota(Path src, final long namespaceQuota, final long storagespaceQuota) + throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(src); + if (newPath.getFS() == underHDFS) { + underHDFS.setQuota(newPath.toPath(), namespaceQuota, storagespaceQuota); + } else { + throw new UnsupportedOperationException( + "Not implemented setQuota(Path src, final long namespaceQuota, final long storagespaceQuota)!"); + } + } + + @Override + public void setQuotaByStorageType(Path src, final StorageType type, final long quota) + throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(src); + if (newPath.getFS() == underHDFS) { + underHDFS.setQuotaByStorageType(newPath.toPath(), type, quota); + } else { + throw new UnsupportedOperationException( + "Not implemented setQuotaByStorageType(Path src, final StorageType type, final long quota)!"); + } + } + + @Override + public FileStatus[] listStatus(Path p) throws IOException { + TransferedPath newPath = transferToNewPath(p); + if (newPath.getFS() == underHDFS) { + return underHDFS.listStatus(newPath.toPath()); + } + FileStatus[] fileStatuses = newPath.getFS().listStatus(newPath.toPath()); + for (int i = 0; i < fileStatuses.length; i++) { + + fileStatuses[i].setPath(transferToWrappedPath(fileStatuses[i].getPath(), newPath)); + } + return fileStatuses; + } + + @Override + public RemoteIterator listFiles(Path f, boolean recursive) + throws FileNotFoundException, IOException { + TransferedPath newPath = transferToNewPath(f); + return new WrappedRemoteIterator(newPath.getFS().listFiles(newPath.toPath(), recursive), + fileStatus -> { + Path originPath = transferToWrappedPath(fileStatus.getPath(), newPath); + 
fileStatus.setPath(originPath); + return fileStatus; + }); + } + + @Override + public RemoteIterator listLocatedStatus(Path p, final PathFilter filter) throws IOException { + TransferedPath newPath = transferToNewPath(p); + return new WrappedRemoteIterator(newPath.getFS().listLocatedStatus(newPath.toPath()), + fileStatus -> { + Path originPath = transferToWrappedPath(fileStatus.getPath(), newPath); + fileStatus.setPath(originPath); + return fileStatus; + }); + } + + @Override + public RemoteIterator listStatusIterator(Path p) throws IOException { + TransferedPath newPath = transferToNewPath(p); + return new WrappedRemoteIterator(newPath.getFS().listStatusIterator(newPath.toPath()), + fileStatus -> { + Path originPath = transferToWrappedPath(fileStatus.getPath(), newPath); + fileStatus.setPath(originPath); + return fileStatus; + }); + } + + @Override + public boolean mkdir(Path f, FsPermission permission) throws IOException { + TransferedPath newPath = transferToNewPath(f); + if (newPath.getFS() == underHDFS) { + return underHDFS.mkdir(newPath.toPath(), permission); + } else { + return newPath.getFS().mkdirs(newPath.toPath(), permission); + } + } + + @Override + public boolean mkdirs(Path path, FsPermission fsPermission) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return newPath.getFS().mkdirs(newPath.toPath(), fsPermission); + } + + @Override + public void close() throws IOException { + IOException ex = null; + try { + super.close(); + } catch (IOException e) { + ex = e; + LOG.error("failed to close", e); + } + for (MountInfo value : mountMap.values()) { + try { + if (value.toFileSystem != null) { + value.toFileSystem.close(); + } + } catch (IOException e) { + ex = e; + LOG.error("failed to close " + value, e); + } + } + + if (underHDFS != null) { + underHDFS.close(); + } + + if (ex != null) { + throw ex; + } + } + + @Override + public FsStatus getStatus(Path p) throws IOException { + TransferedPath newPath = transferToNewPath(p); + 
return newPath.getFS().getStatus(newPath.toPath()); + } + + @Override + public RemoteIterator listCorruptFileBlocks(Path path) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return new WrappedRemoteIterator(newPath.getFS().listCorruptFileBlocks(newPath.toPath()), + p -> transferToWrappedPath(p, newPath)); + } + + // This method only queries the status of a file or directory; it does not perform the real operation + @Override + public FileStatus getFileStatus(Path path) throws IOException { + TransferedPath newPath = transferToNewPath(path); + + FileStatus fileStatus = newPath.getFS().getFileStatus(newPath.toPath()); + // this step has caused subtle bugs: the path must be converted back to its original (wrapped) form + Optional.ofNullable(fileStatus).ifPresent(f -> f.setPath(path)); + return fileStatus; + } + + @Override + public void createSymlink(final Path target, Path link, final boolean createParent) throws IOException { + TransferedPath newTarget = transferToNewPath(target); + TransferedPath newLink = transferToNewPath(link); + + if (!newTarget.getFS().getUri().getAuthority().equals(newLink.getFS().getUri().getAuthority())) { + throw new UnsupportedOperationException("cannot createSymlink across filesystems"); + } + + newLink.getFS().createSymlink(newTarget.toPath(), newLink.toPath(), createParent); + } + + @Override + public FileStatus getFileLinkStatus(Path f) throws IOException { + TransferedPath newPath = transferToNewPath(f); + if (newPath.getFS() == underHDFS) { + return underHDFS.getFileLinkStatus(newPath.toPath()); + } + FileStatus fileStatus = newPath.getFS().getFileLinkStatus(newPath.toPath()); + // this step has caused subtle bugs: the path must be converted back to its original (wrapped) form + Optional.ofNullable(fileStatus).ifPresent(fileStatus1 -> fileStatus1.setPath(f)); + return fileStatus; + } + + @Override + public Path getLinkTarget(Path f) throws IOException { + TransferedPath newPath = transferToNewPath(f); + if (newPath.getFS() == underHDFS) { + return underHDFS.getLinkTarget(newPath.toPath()); + } + return newPath.getFS().getLinkTarget(newPath.toPath()); + } + + @Override + public FileChecksum
getFileChecksum(Path f) throws IOException { + TransferedPath newPath = transferToNewPath(f); + return newPath.getFS().getFileChecksum(newPath.toPath()); + } + + @Override + public FileChecksum getFileChecksum(Path f, final long length) throws IOException { + TransferedPath newPath = transferToNewPath(f); + return newPath.getFS().getFileChecksum(newPath.toPath(), length); + } + + @Override + public void setPermission(Path p, final FsPermission permission) throws IOException { + TransferedPath newPath = transferToNewPath(p); + newPath.getFS().setPermission(newPath.toPath(), permission); + } + + @Override + public void setOwner(Path p, final String username, final String groupname) + throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(p); + newPath.getFS().setOwner(newPath.toPath(), username, groupname); + } + + @Override + public void setTimes(Path p, final long mtime, final long atime) throws UnsupportedOperationException, IOException { + TransferedPath newPath = transferToNewPath(p); + newPath.getFS().setTimes(newPath.toPath(), mtime, atime); + } + + @Override + public Path createSnapshot(final Path path, final String snapshotName) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return newPath.getFS().createSnapshot(newPath.toPath(), snapshotName); + } + + @Override + public void renameSnapshot(final Path path, final String snapshotOldName, final String snapshotNewName) + throws IOException { + TransferedPath newPath = transferToNewPath(path); + newPath.getFS().renameSnapshot(newPath.toPath(), snapshotOldName, snapshotNewName); + } + + @Override + public void deleteSnapshot(final Path snapshotDir, final String snapshotName) throws IOException { + TransferedPath newPath = transferToNewPath(snapshotDir); + newPath.getFS().deleteSnapshot(newPath.toPath(), snapshotName); + } + + @Override + public void modifyAclEntries(Path path, final List<AclEntry> aclSpec) throws IOException { + TransferedPath newPath =
transferToNewPath(path); + newPath.getFS().modifyAclEntries(newPath.toPath(), aclSpec); + } + + @Override + public void removeAclEntries(Path path, final List<AclEntry> aclSpec) throws IOException { + TransferedPath newPath = transferToNewPath(path); + newPath.getFS().removeAclEntries(newPath.toPath(), aclSpec); + } + + @Override + public void removeDefaultAcl(Path path) throws IOException { + TransferedPath newPath = transferToNewPath(path); + newPath.getFS().removeDefaultAcl(newPath.toPath()); + } + + @Override + public void removeAcl(Path path) throws IOException { + TransferedPath newPath = transferToNewPath(path); + newPath.getFS().removeAcl(newPath.toPath()); + } + + @Override + public void setAcl(Path path, final List<AclEntry> aclSpec) throws IOException { + TransferedPath newPath = transferToNewPath(path); + newPath.getFS().setAcl(newPath.toPath(), aclSpec); + } + + @Override + public AclStatus getAclStatus(Path path) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return newPath.getFS().getAclStatus(newPath.toPath()); + } + + @Override + public void setXAttr(Path path, final String name, final byte[] value, final EnumSet<XAttrSetFlag> flag) + throws IOException { + TransferedPath newPath = transferToNewPath(path); + newPath.getFS().setXAttr(newPath.toPath(), name, value, flag); + } + + @Override + public byte[] getXAttr(Path path, final String name) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return newPath.getFS().getXAttr(newPath.toPath(), name); + } + + @Override + public Map<String, byte[]> getXAttrs(Path path) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return newPath.getFS().getXAttrs(newPath.toPath()); + } + + @Override + public Map<String, byte[]> getXAttrs(Path path, final List<String> names) throws IOException { + TransferedPath newPath = transferToNewPath(path); + return newPath.getFS().getXAttrs(newPath.toPath(), names); + } + + @Override + public List<String> listXAttrs(Path path) throws IOException { + TransferedPath newPath =
transferToNewPath(path); + return newPath.getFS().listXAttrs(newPath.toPath()); + } + + @Override + public void removeXAttr(Path path, final String name) throws IOException { + TransferedPath newPath = transferToNewPath(path); + newPath.getFS().removeXAttr(newPath.toPath(), name); + } + + @Override + public void access(Path path, final FsAction mode) throws IOException { + TransferedPath newPath = transferToNewPath(path); + newPath.getFS().access(newPath.toPath(), mode); + } + + @Override + public Path getTrashRoot(Path path) { + TransferedPath newPath = transferToNewPath(path); + try { + return toWrappedPath(newPath.getFS().getTrashRoot(newPath.toPath()), newPath.getFS()); + } catch (IOException e) { + throw new UncheckException(e); + } + } + + private static class WrappedRemoteIterator implements RemoteIterator { + + private final RemoteIterator origin; + + private final Function convertFunc; + + WrappedRemoteIterator(RemoteIterator origin, Function convertFunc) { + this.origin = origin; + this.convertFunc = convertFunc; + } + + @Override + public boolean hasNext() throws IOException { + return origin.hasNext(); + } + + @Override + public Object next() throws IOException { + return convertFunc.apply(origin.next()); + } + } + + static class MountInfo { + String fromPath; + + String toPath; + + Supplier toFileSystemSupplier; + + FileSystem toFileSystem = null; + + public MountInfo(String from, String to, Supplier toFileSystem) { + this.fromPath = from; + this.toPath = to; + this.toFileSystemSupplier = toFileSystem; + } + + public String getFromPath() { + return fromPath; + } + + public String getToPath() { + return toPath; + } + + public FileSystem getToFileSystem() throws IOException { + if (toFileSystem != null) { + return toFileSystem; + } + if (toFileSystemSupplier != null) { + initToFileSystem(); + } + return toFileSystem; + } + + private synchronized void initToFileSystem() throws IOException { + if (toFileSystem == null) { + try { + toFileSystem = 
toFileSystemSupplier.get(); + } catch (UncheckException e) { + throw e.getException(); + } + } + } + } + + static class TransferedPath { + private final boolean isReservedPath; + + MountInfo mountInfo; + + String remainPath; + + public TransferedPath(MountInfo mountInfo, String remainPath, boolean isReservedPath) { + this.mountInfo = mountInfo; + this.remainPath = remainPath; + this.isReservedPath = isReservedPath; + } + + public TransferedPath(MountInfo mountInfo, String remainPath) { + this(mountInfo, remainPath, false); + } + + public MountInfo getMountInfo() { + return mountInfo; + } + + public String getRemainPath() { + return remainPath; + } + + /** + * get the mounted filesystem + * if the mounted filesystem is current MRSHDFSWrapperFileSystem instance, + * use MRSHDFSWrapperFileSystem.super implementation + * + * @return filesystem in mount + * @throws IOException + */ + public FileSystem getFS() throws IOException { + return mountInfo.getToFileSystem(); + } + + public Path toPath() { + if (remainPath == null || remainPath.trim().length() == 0) { + return new Path(mountInfo.toPath); + } else if (isReservedPath) { + return new Path(remainPath); + } else { + return remainPath.length() > 1 + ? new Path(mountInfo.toPath, remainPath.substring(1)) + : new Path(mountInfo.toPath); + } + } + } + + static class UncheckException extends RuntimeException { + public UncheckException(IOException origin) { + super(origin); + } + + public IOException getException() { + return (IOException) getCause(); + } + } + +} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSHDFSWrapper.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSHDFSWrapper.java new file mode 100644 index 0000000..91a85c0 --- /dev/null +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSHDFSWrapper.java @@ -0,0 +1,42 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. 
See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.obs; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.DelegateToFileSystem; +import org.apache.hadoop.hdfs.protocol.HdfsConstants; + +import java.io.IOException; +import java.net.URI; +import java.net.URISyntaxException; + +/** + * Implementation of AbstractFileSystem based on the existing implementation of + * {@link OBSHDFSFileSystem}. + */ +@InterfaceAudience.Public +@InterfaceStability.Evolving +public class OBSHDFSWrapper extends DelegateToFileSystem { + + protected OBSHDFSWrapper(final URI theUri, final Configuration conf) throws IOException, URISyntaxException { + super(theUri, new OBSHDFSFileSystem(), conf, HdfsConstants.HDFS_URI_SCHEME, false); + } +} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSIOException.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSIOException.java index c42ebee..2d56d5f 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSIOException.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSIOException.java @@ -26,7 +26,7 @@ /** * IOException equivalent to {@link ObsException}. 
*/ -class OBSIOException extends IOException { +public class OBSIOException extends IOException { private static final long serialVersionUID = -1582681108285856259L; /** @@ -36,8 +36,7 @@ class OBSIOException extends IOException { OBSIOException(final String operationMsg, final ObsException cause) { super(cause); - Preconditions.checkArgument(operationMsg != null, - "Null 'operation' argument"); + Preconditions.checkArgument(operationMsg != null, "Null 'operation' argument"); Preconditions.checkArgument(cause != null, "Null 'cause' argument"); this.operation = operationMsg; } @@ -48,7 +47,6 @@ public ObsException getCause() { @Override public String getMessage() { - return operation + ": " + getCause().getErrorMessage() - + ", detailMessage: " + super.getMessage(); + return operation + ": " + getCause().getErrorMessage() + ", detailMessage: " + super.getMessage(); } } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputPolicy.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputPolicy.java deleted file mode 100644 index bb16926..0000000 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputPolicy.java +++ /dev/null @@ -1,70 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.hadoop.fs.obs; - -import org.apache.hadoop.classification.InterfaceAudience; -import org.apache.hadoop.classification.InterfaceStability; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -import java.util.Locale; - -import static org.apache.hadoop.fs.obs.Constants.*; - -/** Filesystem input policy. */ -@InterfaceAudience.Private -@InterfaceStability.Unstable -public enum OBSInputPolicy { - Normal(INPUT_FADV_NORMAL), - Sequential(INPUT_FADV_SEQUENTIAL), - Random(INPUT_FADV_RANDOM); - - private static final Logger LOG = LoggerFactory.getLogger(OBSInputPolicy.class); - private final String policy; - - OBSInputPolicy(String policy) { - this.policy = policy; - } - - /** - * Choose an FS access policy. Always returns something, primarily by downgrading to "normal" if - * there is no other match. - * - * @param name strategy name from a configuration option, etc. - * @return the chosen strategy - */ - public static OBSInputPolicy getPolicy(String name) { - String trimmed = name.trim().toLowerCase(Locale.ENGLISH); - switch (trimmed) { - case INPUT_FADV_NORMAL: - return Normal; - case INPUT_FADV_RANDOM: - return Random; - case INPUT_FADV_SEQUENTIAL: - return Sequential; - default: - LOG.warn("Unrecognized " + INPUT_FADVISE + " value: \"{}\"", trimmed); - return Normal; - } - } - - @Override - public String toString() { - return policy; - } -} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSListing.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSListing.java index b36a6b6..3b84a49 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSListing.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSListing.java @@ -44,18 +44,17 @@ class OBSListing { /** * A Path filter which accepts all filenames. 
*/ - static final PathFilter ACCEPT_ALL = - new PathFilter() { - @Override - public boolean accept(final Path file) { - return true; - } + static final PathFilter ACCEPT_ALL = new PathFilter() { + @Override + public boolean accept(final Path file) { + return true; + } - @Override - public String toString() { - return "ACCEPT_ALL"; - } - }; + @Override + public String toString() { + return "ACCEPT_ALL"; + } + }; /** * Class logger. @@ -83,14 +82,9 @@ public String toString() { * @return the iterator * @throws IOException IO Problems */ - FileStatusListingIterator createFileStatusListingIterator( - final Path listPath, - final ListObjectsRequest request, - final PathFilter filter, - final FileStatusAcceptor acceptor) - throws IOException { - return new FileStatusListingIterator( - new ObjectListingIterator(listPath, request), filter, acceptor); + FileStatusListingIterator createFileStatusListingIterator(final Path listPath, final ListObjectsRequest request, + final PathFilter filter, final FileStatusAcceptor acceptor) throws IOException { + return new FileStatusListingIterator(new ObjectListingIterator(listPath, request), filter, acceptor); } /** @@ -99,8 +93,7 @@ FileStatusListingIterator createFileStatusListingIterator( * @param statusIterator an iterator over the remote status entries * @return a new remote iterator */ - LocatedFileStatusIterator createLocatedFileStatusIterator( - final RemoteIterator statusIterator) { + LocatedFileStatusIterator createLocatedFileStatusIterator(final RemoteIterator statusIterator) { return new LocatedFileStatusIterator(statusIterator); } @@ -141,8 +134,7 @@ interface FileStatusAcceptor { * calls where the path handed in refers to a file, not a directory: this is * the iterator returned. */ - static final class SingleStatusRemoteIterator - implements RemoteIterator { + static final class SingleStatusRemoteIterator implements RemoteIterator { /** * The status to return; set to null after the first iteration. 
@@ -217,11 +209,8 @@ static class AcceptFilesOnly implements FileStatusAcceptor { */ @Override public boolean accept(final Path keyPath, final ObsObject summary) { - return !keyPath.equals(qualifiedPath) - && !summary.getObjectKey() - .endsWith(OBSConstants.OBS_FOLDER_SUFFIX) - && !OBSCommonUtils.objectRepresentsDirectory( - summary.getObjectKey(), + return !keyPath.equals(qualifiedPath) && !summary.getObjectKey().endsWith(OBSConstants.OBS_FOLDER_SUFFIX) + && !OBSCommonUtils.objectRepresentsDirectory(summary.getObjectKey(), summary.getMetadata().getContentLength()); } @@ -269,8 +258,7 @@ static class AcceptAllButSelfAndS3nDirs implements FileStatusAcceptor { */ @Override public boolean accept(final Path keyPath, final ObsObject summary) { - return !keyPath.equals(qualifiedPath) && !summary.getObjectKey() - .endsWith(OBSConstants.OBS_FOLDER_SUFFIX); + return !keyPath.equals(qualifiedPath) && !summary.getObjectKey().endsWith(OBSConstants.OBS_FOLDER_SUFFIX); } /** @@ -351,10 +339,8 @@ class FileStatusListingIterator implements RemoteIterator { * file status. 
* @throws IOException IO Problems */ - FileStatusListingIterator( - final ObjectListingIterator listPath, final PathFilter pathFilter, - final FileStatusAcceptor fileStatusAcceptor) - throws IOException { + FileStatusListingIterator(final ObjectListingIterator listPath, final PathFilter pathFilter, + final FileStatusAcceptor fileStatusAcceptor) throws IOException { this.source = listPath; this.filter = pathFilter; this.acceptor = fileStatusAcceptor; @@ -403,8 +389,7 @@ private boolean requestNextBatch() throws IOException { // declare that the request was successful return true; } else { - LOG.debug( - "All entries in batch were filtered...continuing"); + LOG.debug("All entries in batch were filtered...continuing"); } } // if this code is reached, it means that all remaining @@ -424,26 +409,18 @@ private boolean buildNextStatusBatch(final ObjectListing objects) { int added = 0; int ignored = 0; // list to fill in with results. Initial size will be list maximum. - List stats = - new ArrayList<>( - objects.getObjects().size() + objects.getCommonPrefixes() - .size()); + List stats = new ArrayList<>(objects.getObjects().size() + objects.getCommonPrefixes().size()); // objects for (ObsObject summary : objects.getObjects()) { String key = summary.getObjectKey(); Path keyPath = OBSCommonUtils.keyToQualifiedPath(owner, key); if (LOG.isDebugEnabled()) { - LOG.debug("{}: {}", keyPath, - OBSCommonUtils.stringify(summary)); + LOG.debug("{}: {}", keyPath, OBSCommonUtils.stringify(summary)); } // Skip over keys that are ourselves and old OBS _$folder$ files - if (acceptor.accept(keyPath, summary) && filter.accept( - keyPath)) { - FileStatus status = - OBSCommonUtils.createFileStatus( - keyPath, summary, - owner.getDefaultBlockSize(keyPath), - owner.getShortUserName()); + if (acceptor.accept(keyPath, summary) && filter.accept(keyPath)) { + FileStatus status = OBSCommonUtils.createFileStatus(keyPath, summary, + owner.getDefaultBlockSize(keyPath), owner.getShortUserName()); 
LOG.debug("Adding: {}", status); stats.add(status); added++; @@ -458,13 +435,11 @@ private boolean buildNextStatusBatch(final ObjectListing objects) { String key = prefix.getObjectKey(); Path keyPath = OBSCommonUtils.keyToQualifiedPath(owner, key); if (acceptor.accept(keyPath, key) && filter.accept(keyPath)) { - long lastModified = - prefix.getMetadata().getLastModified() == null - ? System.currentTimeMillis() - : OBSCommonUtils.dateToLong( - prefix.getMetadata().getLastModified()); - FileStatus status = new OBSFileStatus(keyPath, lastModified, - lastModified, owner.getShortUserName()); + long lastModified = prefix.getMetadata().getLastModified() == null + ? System.currentTimeMillis() + : OBSCommonUtils.dateToLong(prefix.getMetadata().getLastModified()); + FileStatus status = new OBSFileStatus(keyPath, lastModified, lastModified, + owner.getShortUserName()); LOG.debug("Adding directory: {}", status); added++; stats.add(status); @@ -478,11 +453,7 @@ private boolean buildNextStatusBatch(final ObjectListing objects) { batchSize = stats.size(); statusBatchIterator = stats.listIterator(); boolean hasNext = statusBatchIterator.hasNext(); - LOG.debug( - "Added {} entries; ignored {}; hasNext={}; hasMoreObjects={}", - added, - ignored, - hasNext, + LOG.debug("Added {} entries; ignored {}; hasNext={}; hasMoreObjects={}", added, ignored, hasNext, objects.isTruncated()); return hasNext; } @@ -554,9 +525,7 @@ class ObjectListingIterator implements RemoteIterator { * @param request initial request to make * @throws IOException on any failure to list objects */ - ObjectListingIterator(final Path path, - final ListObjectsRequest request) - throws IOException { + ObjectListingIterator(final Path path, final ListObjectsRequest request) throws IOException { this.listPath = path; this.maxKeys = owner.getMaxKeys(); this.objects = OBSCommonUtils.listObjects(owner, request); @@ -592,19 +561,15 @@ public ObjectListing next() throws IOException { try { if (!objects.isTruncated()) { // 
nothing more to request: fail. - throw new NoSuchElementException( - "No more results in listing of " + listPath); + throw new NoSuchElementException("No more results in listing of " + listPath); } // need to request a new set of objects. - LOG.debug("[{}], Requesting next {} objects under {}", - listingCount, maxKeys, listPath); - objects = OBSCommonUtils.continueListObjects(owner, - objects); + LOG.debug("[{}], Requesting next {} objects under {}", listingCount, maxKeys, listPath); + objects = OBSCommonUtils.continueListObjects(owner, objects); listingCount++; LOG.debug("New listing status: {}", this); } catch (ObsException e) { - throw OBSCommonUtils.translateException("listObjects()", - listPath, e); + throw OBSCommonUtils.translateException("listObjects()", listPath, e); } } return objects; @@ -612,11 +577,7 @@ public ObjectListing next() throws IOException { @Override public String toString() { - return "Object listing iterator against " - + listPath - + "; listing count " - + listingCount - + "; isTruncated=" + return "Object listing iterator against " + listPath + "; listing count " + listingCount + "; isTruncated=" + objects.isTruncated(); } @@ -626,8 +587,7 @@ public String toString() { * Take a remote iterator over a set of {@link FileStatus} instances and * return a remote iterator of {@link LocatedFileStatus} instances. */ - class LocatedFileStatusIterator - implements RemoteIterator { + class LocatedFileStatusIterator implements RemoteIterator { /** * File status. 
     */
@@ -639,8 +599,7 @@ class LocatedFileStatusIterator
         * @param statusRemoteIterator an iterator over the remote status
         *                             entries
         */
-        LocatedFileStatusIterator(
-            final RemoteIterator statusRemoteIterator) {
+        LocatedFileStatusIterator(final RemoteIterator statusRemoteIterator) {
             this.statusIterator = statusRemoteIterator;
         }
@@ -651,8 +610,7 @@ public boolean hasNext() throws IOException {
         @Override
         public LocatedFileStatus next() throws IOException {
-            return OBSCommonUtils.toLocatedFileStatus(owner,
-                statusIterator.next());
+            return OBSCommonUtils.toLocatedFileStatus(owner, statusIterator.next());
         }
     }
 }
diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSLocalDirAllocator.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSLocalDirAllocator.java
index 4d03eb4..a2c21be 100644
--- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSLocalDirAllocator.java
+++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSLocalDirAllocator.java
@@ -55,8 +55,7 @@ public class OBSLocalDirAllocator {
     //A Map from the config item names like "mapred.local.dir"
     //to the instance of the AllocatorPerContext. This
     //is a static object to make sure there exists exactly one instance per JVM
-    private static Map contexts =
-        new TreeMap();
+    private static Map contexts = new TreeMap();

     private String contextCfgItemName;
@@ -103,8 +102,7 @@ private OBSAllocatorPerContext obtainContext(String contextCfgItemName) {
      * @return the complete path to the file on a local disk
      * @throws IOException
      */
-    public Path getLocalPathForWrite(String pathStr,
-        Configuration conf) throws IOException {
+    public Path getLocalPathForWrite(String pathStr, Configuration conf) throws IOException {
         return getLocalPathForWrite(pathStr, SIZE_UNKNOWN, conf);
     }
@@ -121,8 +119,7 @@ public Path getLocalPathForWrite(String pathStr,
      * @return the complete path to the file on a local disk
      * @throws IOException
      */
-    public Path getLocalPathForWrite(String pathStr, long size,
-        Configuration conf) throws IOException {
+    public Path getLocalPathForWrite(String pathStr, long size, Configuration conf) throws IOException {
         return getLocalPathForWrite(pathStr, size, conf, true);
     }
@@ -140,9 +137,8 @@ public Path getLocalPathForWrite(String pathStr, long size,
      * @return the complete path to the file on a local disk
      * @throws IOException
      */
-    public Path getLocalPathForWrite(String pathStr, long size,
-        Configuration conf,
-        boolean checkWrite) throws IOException {
+    public Path getLocalPathForWrite(String pathStr, long size, Configuration conf, boolean checkWrite)
+        throws IOException {
         OBSAllocatorPerContext context = obtainContext(contextCfgItemName);
         return context.getLocalPathForWrite(pathStr, size, conf, checkWrite);
     }
@@ -157,8 +153,7 @@ public Path getLocalPathForWrite(String pathStr, long size,
      * @return the complete path to the file on a local disk
      * @throws IOException
      */
-    public Path getLocalPathToRead(String pathStr,
-        Configuration conf) throws IOException {
+    public Path getLocalPathToRead(String pathStr, Configuration conf) throws IOException {
         OBSAllocatorPerContext context = obtainContext(contextCfgItemName);
         return context.getLocalPathToRead(pathStr, conf);
     }
@@ -171,9 +166,7 @@ public Path getLocalPathToRead(String pathStr,
      * @return all of the paths that exist under any of the roots
      * @throws IOException
      */
-    public Iterable getAllLocalPathsToRead(String pathStr,
-        Configuration conf
-    ) throws IOException {
+    public Iterable getAllLocalPathsToRead(String pathStr, Configuration conf) throws IOException {
         OBSAllocatorPerContext context;
         synchronized (this) {
             context = obtainContext(contextCfgItemName);
@@ -194,8 +187,7 @@ public Iterable getAllLocalPathsToRead(String pathStr,
      * @return a unique temporary file
      * @throws IOException
      */
-    public File createTmpFileForWrite(String pathStr, long size,
-        Configuration conf) throws IOException {
+    public File createTmpFileForWrite(String pathStr, long size, Configuration conf) throws IOException {
         OBSAllocatorPerContext context = obtainContext(contextCfgItemName);
         return context.createTmpFileForWrite(pathStr, size, conf);
     }
@@ -299,8 +291,7 @@ public OBSAllocatorPerContext(String contextCfgItemName) {
      * This method gets called everytime before any read/write to make sure
      * that any change to localDirs is reflected immediately.
      */
-    private Context confChanged(Configuration conf)
-        throws IOException {
+    private Context confChanged(Configuration conf) throws IOException {
         Context ctx = currentContext.get();
         String newLocalDirs = conf.get(contextCfgItemName);
         if (null == newLocalDirs) {
@@ -340,8 +331,7 @@ private Context confChanged2(Configuration conf, String newLocalDirs) throws IOE
                     log.warn("Failed to create " + dirStrings[i]);
                 }
             } catch (IOException ie) {
-                log.warn("Failed to create " + dirStrings[i] + ": "
-                    + ie.getMessage() + "\n", ie);
+                log.warn("Failed to create " + dirStrings[i] + ": " + ie.getMessage() + "\n", ie);
             } //ignore
         }
         ctx.localDirs = dirs.toArray(new Path[dirs.size()]);
@@ -358,8 +348,7 @@ private Context confChanged2(Configuration conf, String newLocalDirs) throws IOE
         return ctx;
     }

-    private Path createPath(Path dir, String path,
-        boolean checkWrite) throws IOException {
+    private Path createPath(Path dir, String path, boolean checkWrite) throws IOException {
         Path file = new Path(dir, path);
         if (checkWrite) {
             //check whether we are able to create a directory here. If the disk
@@ -392,8 +381,8 @@ int getCurrentDirectoryIndex() {
      * If size is not known, use roulette selection -- pick directories
      * with probability proportional to their available space.
      */
-    public Path getLocalPathForWrite(String pathStr, long size,
-        Configuration conf, boolean checkWrite) throws IOException {
+    public Path getLocalPathForWrite(String pathStr, long size, Configuration conf, boolean checkWrite)
+        throws IOException {
         Context ctx = confChanged(conf);
         int numDirs = ctx.localDirs.length;
         int numDirsSearched = 0;
@@ -442,8 +431,7 @@ public Path getLocalPathForWrite(String pathStr, long size,
             while (numDirsSearched < numDirs) {
                 long capacity = ctx.dirDF[dirNum].getAvailable();
                 if (capacity > size) {
-                    returnPath =
-                        createPath(ctx.localDirs[dirNum], pathStr, checkWrite);
+                    returnPath = createPath(ctx.localDirs[dirNum], pathStr, checkWrite);
                     if (returnPath != null) {
                         ctx.getAndIncrDirNumLastAccessed(numDirsSearched);
                         break;
@@ -459,8 +447,7 @@ public Path getLocalPathForWrite(String pathStr, long size,
         }

         //no path found
-        throw new DiskErrorException("Could not find any valid local "
-            + "directory for " + pathStr);
+        throw new DiskErrorException("Could not find any valid local " + "directory for " + pathStr);
     }

     /**
@@ -470,8 +457,7 @@ public Path getLocalPathForWrite(String pathStr, long size,
      * a file on the first path which has enough space. The file is guaranteed
      * to go away when the JVM exits.
      */
-    public File createTmpFileForWrite(String pathStr, long size,
-        Configuration conf) throws IOException {
+    public File createTmpFileForWrite(String pathStr, long size, Configuration conf) throws IOException {
         // find an appropriate directory
         Path path = getLocalPathForWrite(pathStr, size, conf, true);
@@ -489,8 +475,7 @@ public File createTmpFileForWrite(String pathStr, long size,
      * configured dirs for the file's existence and return the complete
      * path to the file when we find one
      */
-    public Path getLocalPathToRead(String pathStr,
-        Configuration conf) throws IOException {
+    public Path getLocalPathToRead(String pathStr, Configuration conf) throws IOException {
         Context ctx = confChanged(conf);
         int numDirs = ctx.localDirs.length;
         int numDirsSearched = 0;
@@ -508,8 +493,8 @@ public Path getLocalPathToRead(String pathStr,
         }

         //no path found
-        throw new DiskErrorException("Could not find " + pathStr + " in any of"
-            + " the configured local directories");
+        throw new DiskErrorException(
+            "Could not find " + pathStr + " in any of" + " the configured local directories");
     }

     private static class PathIterator implements Iterator, Iterable {
@@ -523,8 +508,7 @@ private static class PathIterator implements Iterator, Iterable {
         private Path next = null;

-        private PathIterator(FileSystem fs, String pathStr, Path[] rootDirs)
-            throws IOException {
+        private PathIterator(FileSystem fs, String pathStr, Path[] rootDirs) throws IOException {
             this.fs = fs;
             this.pathStr = pathStr;
             this.rootDirs = rootDirs;
@@ -579,8 +563,7 @@ public Iterator iterator() {
      * @return all of the paths that exist under any of the roots
      * @throws IOException
      */
-    Iterable getAllLocalPathsToRead(String pathStr,
-        Configuration conf) throws IOException {
+    Iterable getAllLocalPathsToRead(String pathStr, Configuration conf) throws IOException {
         Context ctx = confChanged(conf);
         if (pathStr.startsWith("/")) {
             pathStr = pathStr.substring(1);
diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSLoginHelper.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSLoginHelper.java
index 8cde5d7..f7d31a4 100644
--- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSLoginHelper.java
+++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSLoginHelper.java
@@ -18,6 +18,8 @@
 package org.apache.hadoop.fs.obs;

+import static org.apache.commons.lang.StringUtils.equalsIgnoreCase;
+
 import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
@@ -31,8 +33,6 @@
 import java.net.URLDecoder;
 import java.util.Objects;

-import static org.apache.commons.lang.StringUtils.equalsIgnoreCase;
-
 /**
  * Helper for OBS login.
  */
@@ -40,16 +40,14 @@ final class OBSLoginHelper {
     /**
      * login warning.
      */
-    public static final String LOGIN_WARNING =
-        "The Filesystem URI contains login details."
-            + " This is insecure and may be unsupported in future.";
+    public static final String LOGIN_WARNING = "The Filesystem URI contains login details."
+        + " This is insecure and may be unsupported in future.";

     /**
      * plus warning.
      */
-    public static final String PLUS_WARNING =
-        "Secret key contains a special character that should be URL encoded! "
-            + "Attempting to resolve...";
+    public static final String PLUS_WARNING = "Secret key contains a special character that should be URL encoded! "
+        + "Attempting to resolve...";

     /**
      * defined plus unencoded char.
@@ -64,8 +62,7 @@ final class OBSLoginHelper {
     /**
      * Class logger.
      */
-    private static final Logger LOG = LoggerFactory.getLogger(
-        OBSLoginHelper.class);
+    private static final Logger LOG = LoggerFactory.getLogger(OBSLoginHelper.class);

     private OBSLoginHelper() {
     }
@@ -83,11 +80,8 @@ public static URI buildFSURI(final URI uri) {
         Objects.requireNonNull(uri, "null uri");
         Objects.requireNonNull(uri.getScheme(), "null uri.getScheme()");
         if (uri.getHost() == null && uri.getAuthority() != null) {
-            Objects.requireNonNull(
-                uri.getHost(),
-                "null uri host."
-                    + " This can be caused by unencoded / in the "
-                    + "password string");
+            Objects.requireNonNull(uri.getHost(),
+                "null uri host." + " This can be caused by unencoded / in the " + "password string");
         }
         Objects.requireNonNull(uri.getHost(), "null uri host.");
         return URI.create(uri.getScheme() + "://" + uri.getHost());
@@ -101,8 +95,7 @@ public static URI buildFSURI(final URI uri) {
      */
     public static String toString(final URI pathUri) {
         return pathUri != null
-            ? String.format("%s://%s/%s", pathUri.getScheme(),
-            pathUri.getHost(), pathUri.getPath())
+            ? String.format("%s://%s/%s", pathUri.getScheme(), pathUri.getHost(), pathUri.getPath())
             : "(null URI)";
     }
@@ -145,8 +138,7 @@ public static Login extractLoginDetails(final URI name) {
                 String encodedPassword = login.substring(loginSplit + 1);
                 if (encodedPassword.contains(PLUS_UNENCODED)) {
                     LOG.warn(PLUS_WARNING);
-                    encodedPassword = encodedPassword.replaceAll(
-                        "\\" + PLUS_UNENCODED, PLUS_ENCODED);
+                    encodedPassword = encodedPassword.replaceAll("\\" + PLUS_UNENCODED, PLUS_ENCODED);
                 }
                 String password = URLDecoder.decode(encodedPassword, "UTF-8");
                 return new Login(user, password);
@@ -178,19 +170,11 @@ public static URI canonicalizeUri(final URI uri, final int defaultPort) {
         if (uri.getPort() == -1 && defaultPort > 0) {
             // reconstruct the uri with the default port set
             try {
-                newUri =
-                    new URI(
-                        newUri.getScheme(),
-                        null,
-                        newUri.getHost(),
-                        defaultPort,
-                        newUri.getPath(),
-                        newUri.getQuery(),
-                        newUri.getFragment());
+                newUri = new URI(newUri.getScheme(), null, newUri.getHost(), defaultPort, newUri.getPath(),
+                    newUri.getQuery(), newUri.getFragment());
             } catch (URISyntaxException e) {
                 // Should never happen!
-                throw new AssertionError(
-                    "Valid URI became unparseable: " + newUri);
+                throw new AssertionError("Valid URI became unparseable: " + newUri);
             }
         }
@@ -222,8 +206,7 @@ public static URI canonicalizeUri(final URI uri, final int defaultPort) {
      * @param path path to check
      * @param defaultPort default port of FS
      */
-    public static void checkPath(final Configuration conf, final URI fsUri,
-        final Path path, final int defaultPort) {
+    public static void checkPath(final Configuration conf, final URI fsUri, final Path path, final int defaultPort) {
         URI pathUri = path.toUri();
         String thatScheme = pathUri.getScheme();
         if (thatScheme == null) {
@@ -236,13 +219,11 @@ public static void checkPath(final Configuration conf, final URI fsUri,
         if (equalsIgnoreCase(thisScheme, thatScheme)) { // schemes match
             String thisHost = thisUri.getHost();
             String thatHost = pathUri.getHost();
-            if (thatHost == null
-                && // path's host is null
+            if (thatHost == null && // path's host is null
                 thisHost != null) { // fs has a host
                 URI defaultUri = FileSystem.getDefaultUri(conf);
                 if (equalsIgnoreCase(thisScheme, defaultUri.getScheme())) {
-                    pathUri
-                        = defaultUri; // schemes match, so use this uri instead
+                    pathUri = defaultUri; // schemes match, so use this uri instead
                 } else {
                     pathUri = null; // can't determine auth of the path
                 }
@@ -257,9 +238,7 @@ public static void checkPath(final Configuration conf, final URI fsUri,
             }
         }
         // make sure the exception strips out any auth details
-        throw new IllegalArgumentException(
-            "Wrong FS " + OBSLoginHelper.toString(pathUri) + " -expected "
-                + fsUri);
+        throw new IllegalArgumentException("Wrong FS " + OBSLoginHelper.toString(pathUri) + " -expected " + fsUri);
     }

     /**
@@ -298,8 +277,7 @@ public static class Login {
             this(userName, passwd, null);
         }

-        Login(final String userName, final String passwd,
-            final String sessionToken) {
+        Login(final String userName, final String passwd, final String sessionToken) {
             this.user = userName;
             this.password = passwd;
             this.token = sessionToken;
@@ -329,8 +307,7 @@ public boolean equals(final Object o) {
             return false;
         }
         Login that = (Login) o;
-        return Objects.equals(user, that.user) && Objects.equals(password,
-            that.password);
+        return Objects.equals(user, that.user) && Objects.equals(password, that.password);
     }

     @Override
diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java
index f2ab827..d705b25 100644
--- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java
+++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSObjectBucketUtils.java
@@ -45,13 +45,11 @@
 /**
  * Object bucket specific utils for {@link OBSFileSystem}.
  */
-@Deprecated
 final class OBSObjectBucketUtils {
     /**
      * Class logger.
      */
-    private static final Logger LOG = LoggerFactory.getLogger(
-        OBSObjectBucketUtils.class);
+    private static final Logger LOG = LoggerFactory.getLogger(OBSObjectBucketUtils.class);

     private OBSObjectBucketUtils() {

@@ -74,10 +72,8 @@ private OBSObjectBucketUtils() {
      * @throws IOException on IO failure.
      * @throws ObsException on failures inside the OBS SDK
      */
-    static boolean renameBasedOnObject(final OBSFileSystem owner,
-        final Path src, final Path dst) throws RenameFailedException,
-        FileNotFoundException, IOException,
-        ObsException {
+    static boolean renameBasedOnObject(final OBSFileSystem owner, final Path src, final Path dst)
+        throws RenameFailedException, FileNotFoundException, IOException, ObsException {
         String srcKey = OBSCommonUtils.pathToKey(owner, src);
         String dstKey = OBSCommonUtils.pathToKey(owner, dst);
@@ -88,8 +84,7 @@ static boolean renameBasedOnObject(final OBSFileSystem owner,

         // get the source file status; this raises a FNFE if there is no source
         // file.
-        FileStatus srcStatus = OBSCommonUtils.innerGetFileStatusWithRetry(owner,
-            src);
+        FileStatus srcStatus = OBSCommonUtils.innerGetFileStatusWithRetry(owner, src);

         FileStatus dstStatus;
         try {
@@ -100,34 +95,23 @@ static boolean renameBasedOnObject(final OBSFileSystem owner,
             // whether or not it can be the destination of the rename.
             if (dstStatus.isDirectory()) {
                 String newDstKey = OBSCommonUtils.maybeAddTrailingSlash(dstKey);
-                String filename = srcKey.substring(
-                    OBSCommonUtils.pathToKey(owner, src.getParent()).length()
-                        + 1);
+                String filename = srcKey.substring(OBSCommonUtils.pathToKey(owner, src.getParent()).length() + 1);
                 newDstKey = newDstKey + filename;
                 dstKey = newDstKey;
-                dstStatus = OBSCommonUtils.innerGetFileStatusWithRetry(
-                    owner, OBSCommonUtils.keyToPath(dstKey));
+                dstStatus = OBSCommonUtils.innerGetFileStatusWithRetry(owner, OBSCommonUtils.keyToPath(dstKey));
                 if (dstStatus.isDirectory()) {
-                    throw new RenameFailedException(src, dst,
-                        "new destination is an existed directory")
-                        .withExitCode(false);
+                    throw new RenameFailedException(src, dst, "new destination is an existed directory").withExitCode(
+                        false);
                 } else {
-                    throw new RenameFailedException(src, dst,
-                        "new destination is an existed file")
-                        .withExitCode(false);
+                    throw new RenameFailedException(src, dst, "new destination is an existed file").withExitCode(false);
                 }
             } else {
                 if (srcKey.equals(dstKey)) {
-                    LOG.warn(
-                        "rename: src and dest refer to the same file or"
-                            + " directory: {}",
-                        dst);
+                    LOG.warn("rename: src and dest refer to the same file or" + " directory: {}", dst);
                     return true;
                 } else {
-                    throw new RenameFailedException(src, dst,
-                        "destination is an existed file")
-                        .withExitCode(false);
+                    throw new RenameFailedException(src, dst, "destination is an existed file").withExitCode(false);
                 }
             }
         } catch (FileNotFoundException e) {
@@ -137,10 +121,8 @@ static boolean renameBasedOnObject(final OBSFileSystem owner,
             checkDestinationParent(owner, src, dst);
         }

-        if (dstKey.startsWith(srcKey)
-            && dstKey.charAt(srcKey.length()) == Path.SEPARATOR_CHAR) {
-            LOG.error("rename: dest [{}] cannot be a descendant of src [{}]",
-                dst, src);
+        if (dstKey.startsWith(srcKey) && dstKey.charAt(srcKey.length()) == Path.SEPARATOR_CHAR) {
+            LOG.error("rename: dest [{}] cannot be a descendant of src [{}]", dst, src);
             return false;
         }
@@ -166,23 +148,18 @@ static boolean renameBasedOnObject(final OBSFileSystem owner,
         return true;
     }

-    private static void checkDestinationParent(final OBSFileSystem owner,
-        final Path src,
-        final Path dst) throws IOException {
+    private static void checkDestinationParent(final OBSFileSystem owner, final Path src, final Path dst)
+        throws IOException {
         Path parent = dst.getParent();
         if (!OBSCommonUtils.pathToKey(owner, parent).isEmpty()) {
             try {
-                FileStatus dstParentStatus
-                    = OBSCommonUtils.innerGetFileStatusWithRetry(
-                    owner, dst.getParent());
+                FileStatus dstParentStatus = OBSCommonUtils.innerGetFileStatusWithRetry(owner, dst.getParent());
                 if (!dstParentStatus.isDirectory()) {
                     throw new ParentNotDirectoryException(
-                        "destination parent [" + dst.getParent()
-                            + "] is not a directory");
+                        "destination parent [" + dst.getParent() + "] is not a directory");
                 }
             } catch (FileNotFoundException e2) {
-                throw new RenameFailedException(src, dst,
-                    "destination has no parent ");
+                throw new RenameFailedException(src, dst, "destination has no parent ");
             }
         }
     }
@@ -196,11 +173,8 @@ private static void checkDestinationParent(final OBSFileSystem owner,
      * @param srcStatus source object status
      * @throws IOException any problem with rename operation
      */
-    private static void renameFile(final OBSFileSystem owner,
-        final String srcKey,
-        final String dstKey,
-        final FileStatus srcStatus)
-        throws IOException {
+    private static void renameFile(final OBSFileSystem owner, final String srcKey, final String dstKey,
+        final FileStatus srcStatus) throws IOException {
         long startTime = System.nanoTime();

         copyFile(owner, srcKey, dstKey, srcStatus.getLen());
@@ -208,26 +182,17 @@ private static void renameFile(final OBSFileSystem owner,

         if (LOG.isDebugEnabled()) {
             long delay = System.nanoTime() - startTime;
-            LOG.debug("OBSFileSystem rename: "
-                + ", {src="
-                + srcKey
-                + ", dst="
-                + dstKey
-                + ", delay="
-                + delay
-                + "}");
+            LOG.debug("OBSFileSystem rename: " + ", {src=" + srcKey + ", dst=" + dstKey + ", delay=" + delay + "}");
         }
     }

-    static boolean objectDelete(final OBSFileSystem owner,
-        final FileStatus status,
-        final boolean recursive) throws IOException {
+    static boolean objectDelete(final OBSFileSystem owner, final FileStatus status, final boolean recursive)
+        throws IOException {
         Path f = status.getPath();
         String key = OBSCommonUtils.pathToKey(owner, f);

         if (status.isDirectory()) {
-            LOG.debug("delete: Path is a directory: {} - recursive {}", f,
-                recursive);
+            LOG.debug("delete: Path is a directory: {} - recursive {}", f, recursive);
             key = OBSCommonUtils.maybeAddTrailingSlash(key);

             if (!key.endsWith("/")) {
@@ -236,8 +201,7 @@ static boolean objectDelete(final OBSFileSystem owner,
             boolean isEmptyDir = OBSCommonUtils.isFolderEmpty(owner, key);
             if (key.equals("/")) {
-                return OBSCommonUtils.rejectRootDirectoryDelete(
-                    owner.getBucket(), isEmptyDir, recursive);
+                return OBSCommonUtils.rejectRootDirectoryDelete(owner.getBucket(), isEmptyDir, recursive);
             }

             if (!recursive && !isEmptyDir) {
@@ -245,15 +209,10 @@ static boolean objectDelete(final OBSFileSystem owner,
             }

             if (isEmptyDir) {
-                LOG.debug(
-                    "delete: Deleting fake empty directory {} - recursive {}",
-                    f, recursive);
+                LOG.debug("delete: Deleting fake empty directory {} - recursive {}", f, recursive);
                 OBSCommonUtils.deleteObject(owner, key);
             } else {
-                LOG.debug(
-                    "delete: Deleting objects for directory prefix {} "
-                        + "- recursive {}",
-                    f, recursive);
+                LOG.debug("delete: Deleting objects for directory prefix {} " + "- recursive {}", f, recursive);
                 deleteNonEmptyDir(owner, recursive, key);
             }
@@ -277,9 +236,7 @@ static boolean objectDelete(final OBSFileSystem owner,
      * @param dstKey destination folder key
      * @throws IOException any problem with rename folder
      */
-    static void renameFolder(final OBSFileSystem owner, final String srcKey,
-        final String dstKey)
-        throws IOException {
+    static void renameFolder(final OBSFileSystem owner, final String srcKey, final String dstKey) throws IOException {
         long startTime = System.nanoTime();

         List keysToDelete = new ArrayList<>();
@@ -302,15 +259,18 @@ static void renameFolder(final OBSFileSystem owner, final String srcKey,
                 }
                 keysToDelete.add(new KeyAndVersion(summary.getObjectKey()));
-                String newDstKey = dstKey + summary.getObjectKey()
-                    .substring(srcKey.length());
+                String newDstKey = dstKey + summary.getObjectKey().substring(srcKey.length());
                 copyfutures.add(
-                    copyFileAsync(owner, summary.getObjectKey(), newDstKey,
-                        summary.getMetadata().getContentLength()));
+                    copyFileAsync(owner, summary.getObjectKey(), newDstKey, summary.getMetadata().getContentLength()));

                 if (keysToDelete.size() == owner.getMaxEntriesToDelete()) {
                     waitAllCopyFinished(copyfutures);
                     copyfutures.clear();
+                    DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(owner.getBucket());
+                    deleteObjectsRequest.setKeyAndVersions(
+                        (KeyAndVersion[]) keysToDelete.toArray(new KeyAndVersion[0]));
+                    OBSCommonUtils.deleteObjects(owner, deleteObjectsRequest);
+                    keysToDelete.clear();
                 }
             }
@@ -318,6 +278,11 @@ static void renameFolder(final OBSFileSystem owner, final String srcKey,
                 if (!keysToDelete.isEmpty()) {
                     waitAllCopyFinished(copyfutures);
                     copyfutures.clear();
+                    DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(owner.getBucket());
+                    deleteObjectsRequest.setKeyAndVersions(
+                        (KeyAndVersion[]) keysToDelete.toArray(new KeyAndVersion[0]));
+                    OBSCommonUtils.deleteObjects(owner, deleteObjectsRequest);
+                    keysToDelete.clear();
                 }
                 break;
             }
@@ -326,44 +291,30 @@ static void renameFolder(final OBSFileSystem owner, final String srcKey,
         keysToDelete.add(new KeyAndVersion(srcKey));

-        DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(
-            owner.getBucket());
-        deleteObjectsRequest.setKeyAndVersions(
-            keysToDelete.toArray(new KeyAndVersion[0]));
+        DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(owner.getBucket());
+        deleteObjectsRequest.setKeyAndVersions(keysToDelete.toArray(new KeyAndVersion[0]));
         OBSCommonUtils.deleteObjects(owner, deleteObjectsRequest);

         if (LOG.isDebugEnabled()) {
             long delay = System.nanoTime() - startTime;
-            LOG.debug(
-                "OBSFileSystem rename: "
-                    + ", {src="
-                    + srcKey
-                    + ", dst="
-                    + dstKey
-                    + ", delay="
-                    + delay
-                    + "}");
+            LOG.debug("OBSFileSystem rename: " + ", {src=" + srcKey + ", dst=" + dstKey + ", delay=" + delay + "}");
         }
     }

-    private static void waitAllCopyFinished(
-        final List> copyFutures)
-        throws IOException {
+    private static void waitAllCopyFinished(final List> copyFutures) throws IOException {
         try {
             for (Future copyFuture : copyFutures) {
                 copyFuture.get();
             }
         } catch (InterruptedException e) {
             LOG.warn("Interrupted while copying objects (copy)");
-            throw new InterruptedIOException(
-                "Interrupted while copying objects (copy)");
+            throw new InterruptedIOException("Interrupted while copying objects (copy)");
         } catch (ExecutionException e) {
             for (Future future : copyFutures) {
                 future.cancel(true);
             }
-            throw OBSCommonUtils.extractException(
-                "waitAllCopyFinished", copyFutures.toString(), e);
+            throw OBSCommonUtils.extractException("waitAllCopyFinished", copyFutures.toString(), e);
         }
     }
@@ -374,8 +325,7 @@ private static void waitAllCopyFinished(
      * @param key key
      * @return the metadata
      */
-    protected static ObjectMetadata getObjectMetadata(final OBSFileSystem owner,
-        final String key) {
+    protected static ObjectMetadata getObjectMetadata(final OBSFileSystem owner, final String key) {
         GetObjectMetadataRequest request = new GetObjectMetadataRequest();
         request.setBucketName(owner.getBucket());
         request.setObjectKey(key);
@@ -402,11 +352,10 @@ static ObjectMetadata newObjectMetadata(final long length) {
         return om;
     }

-    private static void deleteNonEmptyDir(final OBSFileSystem owner,
-        final boolean recursive, final String key) throws IOException {
+    private static void deleteNonEmptyDir(final OBSFileSystem owner, final boolean recursive, final String key)
+        throws IOException {
         String delimiter = recursive ? null : "/";
-        ListObjectsRequest request = OBSCommonUtils.createListObjectsRequest(
-            owner, key, delimiter);
+        ListObjectsRequest request = OBSCommonUtils.createListObjectsRequest(owner, key, delimiter);
         ObjectListing objects = OBSCommonUtils.listObjects(owner, request);
         List keys = new ArrayList<>(objects.getObjects().size());
@@ -435,8 +384,7 @@ private static void deleteNonEmptyDir(final OBSFileSystem owner,
         }
     }

-    static void createFakeDirectoryIfNecessary(final OBSFileSystem owner,
-        final Path f)
+    static void createFakeDirectoryIfNecessary(final OBSFileSystem owner, final Path f)
         throws IOException, ObsException {
         String key = OBSCommonUtils.pathToKey(owner, f);
@@ -446,8 +394,7 @@ static void createFakeDirectoryIfNecessary(final OBSFileSystem owner,
         }
     }

-    static void createFakeDirectory(final OBSFileSystem owner,
-        final String objectName)
+    static void createFakeDirectory(final OBSFileSystem owner, final String objectName)
         throws ObsException, IOException {
         String newObjectName = objectName;
         newObjectName = OBSCommonUtils.maybeAddTrailingSlash(newObjectName);
@@ -455,38 +402,31 @@ static void createFakeDirectory(final OBSFileSystem owner,
     }

     // Used to create an empty file that represents an empty directory
-    static void createEmptyObject(final OBSFileSystem owner,
-        final String objectName) throws IOException {
+    static void createEmptyObject(final OBSFileSystem owner, final String objectName) throws IOException {
         long delayMs;
         int retryTime = 0;
         long startTime = System.currentTimeMillis();
-        while (System.currentTimeMillis() - startTime
-            <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
+        while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
             InputStream im = null;
             try {
                 im = new InputStream() {
-                    @Override
-                    public int read() {
-                        return -1;
-                    }
-                };
-                PutObjectRequest putObjectRequest
-                    = OBSCommonUtils.newPutObjectRequest(owner, objectName,
+                    @Override
+                    public int read() {
+                        return -1;
+                    }
+                };
+                PutObjectRequest putObjectRequest = OBSCommonUtils.newPutObjectRequest(owner, objectName,
                     newObjectMetadata(0L), im);
                 owner.getObsClient().putObject(putObjectRequest);
                 owner.getSchemeStatistics().incrementWriteOps(1);
-                owner.getSchemeStatistics().incrementBytesWritten(
-                    putObjectRequest.getMetadata().getContentLength());
+                owner.getSchemeStatistics().incrementBytesWritten(putObjectRequest.getMetadata().getContentLength());
                 return;
             } catch (ObsException e) {
-                LOG.debug("Delete path failed with [{}], "
-                    + "retry time [{}] - request id [{}] - "
-                    + "error code [{}] - error message [{}]",
-                    e.getResponseCode(), retryTime, e.getErrorRequestId(),
+                LOG.debug("Delete path failed with [{}], " + "retry time [{}] - request id [{}] - "
+                    + "error code [{}] - error message [{}]", e.getResponseCode(), retryTime, e.getErrorRequestId(),
                     e.getErrorCode(), e.getErrorMessage());

-                IOException ioException = OBSCommonUtils.translateException(
-                    "innerCreateEmptyObject", objectName, e);
+                IOException ioException = OBSCommonUtils.translateException("innerCreateEmptyObject", objectName, e);
                 if (!(ioException instanceof OBSIOException)) {
                     throw ioException;
                 }
@@ -519,22 +459,19 @@ public int read() {
      * @throws InterruptedIOException the operation was interrupted
      * @throws IOException Other IO problems
      */
-    static void copyFile(final OBSFileSystem owner, final String srcKey,
-        final String dstKey, final long size)
+    static void copyFile(final OBSFileSystem owner, final String srcKey, final String dstKey, final long size)
         throws IOException, InterruptedIOException {
         long delayMs;
         int retryTime = 0;
         long startTime = System.currentTimeMillis();
-        while (System.currentTimeMillis() - startTime
-            <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
+        while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
             try {
                 innerCopyFile(owner, srcKey, dstKey, size);
                 return;
             } catch (InterruptedIOException e) {
                 throw e;
             } catch (OBSIOException e) {
-                String errMsg = String.format("Failed to copy file from %s to "
-                    + "%s with size %s, retry time %s",
+                String errMsg = String.format("Failed to copy file from %s to " + "%s with size %s, retry time %s",
                     srcKey, dstKey, size, retryTime);
                 LOG.debug(errMsg, e);
                 delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime);
@@ -552,89 +489,66 @@ static void copyFile(final OBSFileSystem owner, final String srcKey,
         }
     }

-    private static void innerCopyFile(final OBSFileSystem owner,
-        final String srcKey,
-        final String dstKey, final long size)
-        throws IOException {
+    private static void innerCopyFile(final OBSFileSystem owner, final String srcKey, final String dstKey,
+        final long size) throws IOException {
         LOG.debug("copyFile {} -> {} ", srcKey, dstKey);
         try {
             // 100MB per part
             if (size > owner.getCopyPartSize()) {
                 // initial copy part task
-                InitiateMultipartUploadRequest request
-                    = new InitiateMultipartUploadRequest(owner.getBucket(),
-                    dstKey);
+                InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(owner.getBucket(), dstKey);
                 request.setAcl(owner.getCannedACL());
                 if (owner.getSse().isSseCEnable()) {
                     request.setSseCHeader(owner.getSse().getSseCHeader());
                 } else if (owner.getSse().isSseKmsEnable()) {
                     request.setSseKmsHeader(owner.getSse().getSseKmsHeader());
                 }
-                InitiateMultipartUploadResult result = owner.getObsClient()
-                    .initiateMultipartUpload(request);
+                InitiateMultipartUploadResult result = owner.getObsClient().initiateMultipartUpload(request);

                 final String uploadId = result.getUploadId();
                 LOG.debug("Multipart copy file, uploadId: {}", uploadId);

                 // count the parts
                 long partCount = calPartCount(owner.getCopyPartSize(), size);

-                final List partEtags =
-                    getCopyFilePartEtags(owner, srcKey, dstKey, size, uploadId,
-                        partCount);
+                final List partEtags = getCopyFilePartEtags(owner, srcKey, dstKey, size, uploadId, partCount);

                 // merge the copy parts
-                CompleteMultipartUploadRequest completeMultipartUploadRequest =
-                    new CompleteMultipartUploadRequest(owner.getBucket(),
-                        dstKey, uploadId, partEtags);
-                owner.getObsClient()
-                    .completeMultipartUpload(completeMultipartUploadRequest);
+                CompleteMultipartUploadRequest completeMultipartUploadRequest = new CompleteMultipartUploadRequest(
+                    owner.getBucket(), dstKey, uploadId, partEtags);
+                owner.getObsClient().completeMultipartUpload(completeMultipartUploadRequest);
             } else {
                 ObjectMetadata srcom = getObjectMetadata(owner, srcKey);
                 ObjectMetadata dstom = cloneObjectMetadata(srcom);
-                final CopyObjectRequest copyObjectRequest =
-                    new CopyObjectRequest(owner.getBucket(), srcKey,
-                        owner.getBucket(), dstKey);
+                final CopyObjectRequest copyObjectRequest = new CopyObjectRequest(owner.getBucket(), srcKey,
+                    owner.getBucket(), dstKey);
                 copyObjectRequest.setAcl(owner.getCannedACL());
                 copyObjectRequest.setNewObjectMetadata(dstom);
                 if (owner.getSse().isSseCEnable()) {
-                    copyObjectRequest.setSseCHeader(
-                        owner.getSse().getSseCHeader());
-                    copyObjectRequest.setSseCHeaderSource(
-                        owner.getSse().getSseCHeader());
+                    copyObjectRequest.setSseCHeader(owner.getSse().getSseCHeader());
+                    copyObjectRequest.setSseCHeaderSource(owner.getSse().getSseCHeader());
                 } else if (owner.getSse().isSseKmsEnable()) {
-                    copyObjectRequest.setSseKmsHeader(
-                        owner.getSse().getSseKmsHeader());
+                    copyObjectRequest.setSseKmsHeader(owner.getSse().getSseKmsHeader());
                 }

                 owner.getObsClient().copyObject(copyObjectRequest);
             }

             owner.getSchemeStatistics().incrementWriteOps(1);
         } catch (ObsException e) {
-            throw OBSCommonUtils.translateException(
-                "copyFile(" + srcKey + ", " + dstKey + ")", srcKey, e);
+            throw OBSCommonUtils.translateException("copyFile(" + srcKey + ", " + dstKey + ")", srcKey, e);
         }
     }

     static int calPartCount(final long partSize, final long cloudSize) {
         // get user setting of per copy part size ,default is 100MB
         // calculate the part count
-        long partCount = cloudSize % partSize == 0
-            ? cloudSize / partSize
-            : cloudSize / partSize + 1;
+        long partCount = cloudSize % partSize == 0 ? cloudSize / partSize : cloudSize / partSize + 1;
         return (int) partCount;
     }

-    static List getCopyFilePartEtags(final OBSFileSystem owner,
-        final String srcKey,
-        final String dstKey,
-        final long objectSize,
-        final String uploadId,
-        final long partCount)
-        throws IOException {
-        final List partEtags = Collections.synchronizedList(
-            new ArrayList<>());
+    static List getCopyFilePartEtags(final OBSFileSystem owner, final String srcKey, final String dstKey,
+        final long objectSize, final String uploadId, final long partCount) throws IOException {
+        final List partEtags = Collections.synchronizedList(new ArrayList<>());
         final List> partCopyFutures = new ArrayList<>();
-        submitCopyPartTasks(owner, srcKey, dstKey, objectSize, uploadId,
-            partCount, partEtags, partCopyFutures);
+        submitCopyPartTasks(owner, srcKey, dstKey, objectSize, uploadId, partCount, partEtags, partCopyFutures);

         // wait the tasks for completing
         try {
@@ -643,8 +557,7 @@ static List getCopyFilePartEtags(final OBSFileSystem owner,
             }
         } catch (InterruptedException e) {
             LOG.warn("Interrupted while copying objects (copy)");
-            throw new InterruptedIOException(
-                "Interrupted while copying objects (copy)");
+            throw new InterruptedIOException("Interrupted while copying objects (copy)");
         } catch (ExecutionException e) {
             LOG.error("Multipart copy file exception.", e);
             for (Future future : partCopyFutures) {
@@ -652,13 +565,10 @@ static List getCopyFilePartEtags(final OBSFileSystem owner,
             }

             owner.getObsClient()
-                .abortMultipartUpload(
-                    new AbortMultipartUploadRequest(owner.getBucket(), dstKey,
-                        uploadId));
+                .abortMultipartUpload(new AbortMultipartUploadRequest(owner.getBucket(), dstKey, uploadId));
             throw OBSCommonUtils.extractException(
-                "Multi-part copy with id '" + uploadId + "' from " + srcKey
-                    + "to " + dstKey, dstKey, e);
+                "Multi-part copy with id '" + uploadId + "' from " + srcKey + "to " + dstKey, dstKey, e);
         }

         // Make part numbers in ascending order
@@ -667,45 +577,31 @@ static List getCopyFilePartEtags(final OBSFileSystem owner,
     }

     @SuppressWarnings("checkstyle:ParameterNumber")
-    private static void submitCopyPartTasks(final OBSFileSystem owner,
-        final String srcKey,
-        final String dstKey,
-        final long objectSize,
-        final String uploadId,
-        final long partCount,
-        final List partEtags,
+    private static void submitCopyPartTasks(final OBSFileSystem owner, final String srcKey, final String dstKey,
+        final long objectSize, final String uploadId, final long partCount, final List partEtags,
         final List> partCopyFutures) {
         for (int i = 0; i < partCount; i++) {
             final long rangeStart = i * owner.getCopyPartSize();
-            final long rangeEnd = (i + 1 == partCount)
-                ? objectSize - 1
-                : rangeStart + owner.getCopyPartSize() - 1;
+            final long rangeEnd = (i + 1 == partCount) ? objectSize - 1 : rangeStart + owner.getCopyPartSize() - 1;
             final int partNumber = i + 1;
-            partCopyFutures.add(
-                owner.getBoundedCopyPartThreadPool().submit(() -> {
-                    CopyPartRequest request = new CopyPartRequest();
-                    request.setUploadId(uploadId);
-                    request.setSourceBucketName(owner.getBucket());
-                    request.setSourceObjectKey(srcKey);
-                    request.setDestinationBucketName(owner.getBucket());
-                    request.setDestinationObjectKey(dstKey);
-                    request.setByteRangeStart(rangeStart);
-                    request.setByteRangeEnd(rangeEnd);
-                    request.setPartNumber(partNumber);
-                    if (owner.getSse().isSseCEnable()) {
-                        request.setSseCHeaderSource(
-                            owner.getSse().getSseCHeader());
-                        request.setSseCHeaderDestination(
-                            owner.getSse().getSseCHeader());
-                    }
-                    CopyPartResult result = owner.getObsClient()
-                        .copyPart(request);
-                    partEtags.add(
-                        new PartEtag(result.getEtag(), result.getPartNumber()));
-                    LOG.debug(
-                        "Multipart copy file, uploadId: {}, Part#{} done.",
-                        uploadId, partNumber);
-                }));
+            partCopyFutures.add(owner.getBoundedCopyPartThreadPool().submit(() -> {
+                CopyPartRequest request = new CopyPartRequest();
+                request.setUploadId(uploadId);
+                request.setSourceBucketName(owner.getBucket());
+                request.setSourceObjectKey(srcKey);
+                request.setDestinationBucketName(owner.getBucket());
+                request.setDestinationObjectKey(dstKey);
+                request.setByteRangeStart(rangeStart);
+                request.setByteRangeEnd(rangeEnd);
+                request.setPartNumber(partNumber);
+                if (owner.getSse().isSseCEnable()) {
+                    request.setSseCHeaderSource(owner.getSse().getSseCHeader());
+                    request.setSseCHeaderDestination(owner.getSse().getSseCHeader());
+                }
+                CopyPartResult result = owner.getObsClient().copyPart(request);
+                partEtags.add(new PartEtag(result.getEtag(), result.getPartNumber()));
+                LOG.debug("Multipart copy file, uploadId: {}, Part#{} done.", uploadId, partNumber);
+            }));
         }
     }
@@ -717,8 +613,7 @@ private static void submitCopyPartTasks(final OBSFileSystem owner,
      * @param source the {@link ObjectMetadata} to copy
     * @return a copy
of {@link ObjectMetadata} with only relevant attributes */ - private static ObjectMetadata cloneObjectMetadata( - final ObjectMetadata source) { + private static ObjectMetadata cloneObjectMetadata(final ObjectMetadata source) { // This approach may be too brittle, especially if // in future there are new attributes added to ObjectMetadata // that we do not explicitly call to set here @@ -730,9 +625,7 @@ private static ObjectMetadata cloneObjectMetadata( return ret; } - static OBSFileStatus innerGetObjectStatus(final OBSFileSystem owner, - final Path f) - throws IOException { + static OBSFileStatus innerGetObjectStatus(final OBSFileSystem owner, final Path f) throws IOException { final Path path = OBSCommonUtils.qualify(owner, f); String key = OBSCommonUtils.pathToKey(owner, path); LOG.debug("Getting path status for {} ({})", path, key); @@ -740,21 +633,17 @@ static OBSFileStatus innerGetObjectStatus(final OBSFileSystem owner, try { ObjectMetadata meta = getObjectMetadata(owner, key); - if (OBSCommonUtils.objectRepresentsDirectory(key, - meta.getContentLength())) { + if (OBSCommonUtils.objectRepresentsDirectory(key, meta.getContentLength())) { LOG.debug("Found exact file: fake directory"); return new OBSFileStatus(path, owner.getShortUserName()); } else { LOG.debug("Found exact file: normal file"); - return new OBSFileStatus(meta.getContentLength(), - OBSCommonUtils.dateToLong(meta.getLastModified()), - path, owner.getDefaultBlockSize(path), - owner.getShortUserName()); + return new OBSFileStatus(meta.getContentLength(), OBSCommonUtils.dateToLong(meta.getLastModified()), + path, owner.getDefaultBlockSize(path), owner.getShortUserName()); } } catch (ObsException e) { if (e.getResponseCode() != OBSCommonUtils.NOT_FOUND_CODE) { - throw OBSCommonUtils.translateException("getFileStatus", - path, e); + throw OBSCommonUtils.translateException("getFileStatus", path, e); } } @@ -763,26 +652,19 @@ static OBSFileStatus innerGetObjectStatus(final OBSFileSystem owner, try { 
ObjectMetadata meta = getObjectMetadata(owner, newKey); - if (OBSCommonUtils.objectRepresentsDirectory(newKey, - meta.getContentLength())) { + if (OBSCommonUtils.objectRepresentsDirectory(newKey, meta.getContentLength())) { LOG.debug("Found file (with /): fake directory"); return new OBSFileStatus(path, owner.getShortUserName()); } else { - LOG.debug( - "Found file (with /): real file? should not " - + "happen: {}", - key); + LOG.debug("Found file (with /): real file? should not " + "happen: {}", key); return new OBSFileStatus(meta.getContentLength(), - OBSCommonUtils.dateToLong(meta.getLastModified()), - path, - owner.getDefaultBlockSize(path), + OBSCommonUtils.dateToLong(meta.getLastModified()), path, owner.getDefaultBlockSize(path), owner.getShortUserName()); } } catch (ObsException e) { if (e.getResponseCode() != OBSCommonUtils.NOT_FOUND_CODE) { - throw OBSCommonUtils.translateException("getFileStatus", - newKey, e); + throw OBSCommonUtils.translateException("getFileStatus", newKey, e); } } } @@ -794,8 +676,7 @@ static OBSFileStatus innerGetObjectStatus(final OBSFileSystem owner, return new OBSFileStatus(path, owner.getShortUserName()); } catch (ObsException e) { if (e.getResponseCode() != OBSCommonUtils.NOT_FOUND_CODE) { - throw OBSCommonUtils.translateException("getFileStatus", key, - e); + throw OBSCommonUtils.translateException("getFileStatus", key, e); } } @@ -803,8 +684,7 @@ static OBSFileStatus innerGetObjectStatus(final OBSFileSystem owner, throw new FileNotFoundException("No such file or directory: " + path); } - static ContentSummary getDirectoryContentSummary(final OBSFileSystem owner, - final String key) throws IOException { + static ContentSummary getDirectoryContentSummary(final OBSFileSystem owner, final String key) throws IOException { String newKey = key; newKey = OBSCommonUtils.maybeAddTrailingSlash(newKey); long[] summary = {0, 0, 1}; @@ -816,11 +696,9 @@ static ContentSummary getDirectoryContentSummary(final OBSFileSystem owner, 
request.setMaxKeys(owner.getMaxKeys()); ObjectListing objects = OBSCommonUtils.listObjects(owner, request); while (true) { - if (!objects.getCommonPrefixes().isEmpty() || !objects.getObjects() - .isEmpty()) { + if (!objects.getCommonPrefixes().isEmpty() || !objects.getObjects().isEmpty()) { if (LOG.isDebugEnabled()) { - LOG.debug("Found path as directory (with /): {}/{}", - objects.getCommonPrefixes().size(), + LOG.debug("Found path as directory (with /): {}/{}", objects.getCommonPrefixes().size(), objects.getObjects().size()); } for (String prefix : objects.getCommonPrefixes()) { @@ -829,8 +707,7 @@ static ContentSummary getDirectoryContentSummary(final OBSFileSystem owner, } for (ObsObject obj : objects.getObjects()) { - LOG.debug("Summary: {} {}", obj.getObjectKey(), - obj.getMetadata().getContentLength()); + LOG.debug("Summary: {} {}", obj.getObjectKey(), obj.getMetadata().getContentLength()); if (!obj.getObjectKey().endsWith("/")) { summary[0] += obj.getMetadata().getContentLength(); summary[1] += 1; @@ -844,18 +721,17 @@ static ContentSummary getDirectoryContentSummary(final OBSFileSystem owner, objects = OBSCommonUtils.continueListObjects(owner, objects); } summary[2] += directories.size(); - LOG.debug(String.format( - "file size [%d] - file count [%d] - directory count [%d] - " - + "file path [%s]", - summary[0], - summary[1], summary[2], newKey)); + LOG.debug( + String.format("file size [%d] - file count [%d] - directory count [%d] - " + "file path [%s]", summary[0], + summary[1], summary[2], newKey)); return new ContentSummary.Builder().length(summary[0]) - .fileCount(summary[1]).directoryCount(summary[2]) - .spaceConsumed(summary[0]).build(); + .fileCount(summary[1]) + .directoryCount(summary[2]) + .spaceConsumed(summary[0]) + .build(); } - private static void getDirectories(final String key, final String sourceKey, - final Set directories) { + private static void getDirectories(final String key, final String sourceKey, final Set directories) { Path p = 
new Path(key); Path sourcePath = new Path(sourceKey); // directory must add first @@ -871,9 +747,7 @@ private static void getDirectories(final String key, final String sourceKey, } } - private static Future copyFileAsync( - final OBSFileSystem owner, - final String srcKey, + private static Future copyFileAsync(final OBSFileSystem owner, final String srcKey, final String dstKey, final long size) { return owner.getBoundedCopyThreadPool().submit(() -> { copyFile(owner, srcKey, dstKey, size); diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSPosixBucketUtils.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSPosixBucketUtils.java index 2d3cca0..c641a4d 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSPosixBucketUtils.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSPosixBucketUtils.java @@ -33,8 +33,7 @@ final class OBSPosixBucketUtils { /** * Class logger. */ - private static final Logger LOG = LoggerFactory.getLogger( - OBSPosixBucketUtils.class); + private static final Logger LOG = LoggerFactory.getLogger(OBSPosixBucketUtils.class); private OBSPosixBucketUtils() { } @@ -73,8 +72,7 @@ static boolean fsIsFolder(final ObsFSAttribute attr) { * pretty vague. * @throws IOException on IO failure. 
*/ - static boolean renameBasedOnPosix(final OBSFileSystem owner, final Path src, - final Path dst) throws IOException { + static boolean renameBasedOnPosix(final OBSFileSystem owner, final Path src, final Path dst) throws IOException { Path dstPath = dst; String srcKey = OBSCommonUtils.pathToKey(owner, src); String dstKey = OBSCommonUtils.pathToKey(owner, dstPath); @@ -85,37 +83,24 @@ static boolean renameBasedOnPosix(final OBSFileSystem owner, final Path src, } try { - FileStatus dstStatus = OBSCommonUtils.innerGetFileStatusWithRetry( - owner, - dstPath); + FileStatus dstStatus = OBSCommonUtils.innerGetFileStatusWithRetry(owner, dstPath); if (dstStatus.isDirectory()) { - String newDstString = OBSCommonUtils.maybeAddTrailingSlash( - dstPath.toString()); - String filename = srcKey.substring( - OBSCommonUtils.pathToKey(owner, src.getParent()) - .length() + 1); + String newDstString = OBSCommonUtils.maybeAddTrailingSlash(dstPath.toString()); + String filename = srcKey.substring(OBSCommonUtils.pathToKey(owner, src.getParent()).length() + 1); dstPath = new Path(newDstString + filename); dstKey = OBSCommonUtils.pathToKey(owner, dstPath); - LOG.debug( - "rename: dest is an existing directory and will be " - + "changed to [{}]", dstPath); + LOG.debug("rename: dest is an existing directory and will be " + "changed to [{}]", dstPath); if (owner.exists(dstPath)) { - LOG.error("rename: failed to rename " + src + " to " - + dstPath - + " because destination exists"); + LOG.error("rename: failed to rename " + src + " to " + dstPath + " because destination exists"); return false; } } else { if (srcKey.equals(dstKey)) { - LOG.warn( - "rename: src and dest refer to the same " - + "file or directory: {}", dstPath); + LOG.warn("rename: src and dest refer to the same " + "file or directory: {}", dstPath); return true; } else { - LOG.error("rename: failed to rename " + src + " to " - + dstPath - + " because destination exists"); + LOG.error("rename: failed to rename " + src + " to " + 
dstPath + " because destination exists"); return false; } } @@ -127,21 +112,17 @@ static boolean renameBasedOnPosix(final OBSFileSystem owner, final Path src, throw new ParentNotDirectoryException(e.getMessage()); } - if (dstKey.startsWith(srcKey) - && (dstKey.equals(srcKey) + if (dstKey.startsWith(srcKey) && (dstKey.equals(srcKey) || dstKey.charAt(srcKey.length()) == Path.SEPARATOR_CHAR)) { - LOG.error("rename: dest [{}] cannot be a descendant of src [{}]", - dstPath, src); + LOG.error("rename: dest [{}] cannot be a descendant of src [{}]", dstPath, src); return false; } return innerFsRenameWithRetry(owner, src, dstPath, srcKey, dstKey); } - static boolean innerFsRenameWithRetry(final OBSFileSystem owner, - final Path src, - final Path dst, final String srcKey, final String dstKey) - throws IOException { + static boolean innerFsRenameWithRetry(final OBSFileSystem owner, final Path src, final Path dst, + final String srcKey, final String dstKey) throws IOException { String newSrcKey = srcKey; String newDstKey = dstKey; IOException lastException; @@ -149,24 +130,18 @@ static boolean innerFsRenameWithRetry(final OBSFileSystem owner, int retryTime = 0; long startTime = System.currentTimeMillis(); do { - boolean isRegularDirPath = - newSrcKey.endsWith("/") && newDstKey.endsWith("/"); + boolean isRegularDirPath = newSrcKey.endsWith("/") && newDstKey.endsWith("/"); try { - LOG.debug("rename: {}-st rename from [{}] to [{}] ...", - retryTime, newSrcKey, newDstKey); + LOG.debug("rename: {}-st rename from [{}] to [{}] ...", retryTime, newSrcKey, newDstKey); innerFsRenameFile(owner, newSrcKey, newDstKey); return true; } catch (FileNotFoundException e) { if (owner.exists(dst)) { - LOG.debug( - "rename: successfully {}-st rename src [{}] " - + "to dest [{}] with SDK retry", - retryTime, src, dst, e); + LOG.debug("rename: successfully {}-st rename src [{}] " + "to dest [{}] with SDK retry", retryTime, + src, dst, e); return true; } else { - LOG.error( - "rename: failed {}-st 
rename src [{}] to dest [{}]", - retryTime, src, dst, e); + LOG.error("rename: failed {}-st rename src [{}] to dest [{}]", retryTime, src, dst, e); return false; } } catch (IOException e) { @@ -175,13 +150,10 @@ static boolean innerFsRenameWithRetry(final OBSFileSystem owner, } try { - FileStatus srcFileStatus = - OBSCommonUtils.innerGetFileStatusWithRetry(owner, src); + FileStatus srcFileStatus = OBSCommonUtils.innerGetFileStatusWithRetry(owner, src); if (srcFileStatus.isDirectory()) { - newSrcKey = OBSCommonUtils.maybeAddTrailingSlash( - newSrcKey); - newDstKey = OBSCommonUtils.maybeAddTrailingSlash( - newDstKey); + newSrcKey = OBSCommonUtils.maybeAddTrailingSlash(newSrcKey); + newDstKey = OBSCommonUtils.maybeAddTrailingSlash(newDstKey); } else if (e instanceof AccessControlException) { throw e; } @@ -190,13 +162,9 @@ static boolean innerFsRenameWithRetry(final OBSFileSystem owner, } lastException = e; - LOG.warn( - "rename: failed {}-st rename src [{}] to dest [{}]", - retryTime, src, dst, e); + LOG.warn("rename: failed {}-st rename src [{}] to dest [{}]", retryTime, src, dst, e); if (owner.exists(dst) && owner.exists(src)) { - LOG.warn( - "rename: failed {}-st rename src [{}] to " - + "dest [{}] with SDK retry", retryTime, src, + LOG.warn("rename: failed {}-st rename src [{}] to " + "dest [{}] with SDK retry", retryTime, src, dst, e); return false; } @@ -212,18 +180,14 @@ static boolean innerFsRenameWithRetry(final OBSFileSystem owner, } } } - } while (System.currentTimeMillis() - startTime - <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY); + } while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY); - LOG.error( - "rename: failed {}-st rename src [{}] to dest [{}]", - retryTime, src, dst, lastException); + LOG.error("rename: failed {}-st rename src [{}] to dest [{}]", retryTime, src, dst, lastException); throw lastException; } - static void innerFsRenameFile(final OBSFileSystem owner, - final String srcKey, - 
final String dstKey) throws IOException { + static void innerFsRenameFile(final OBSFileSystem owner, final String srcKey, final String dstKey) + throws IOException { LOG.debug("RenameFile path {} to {}", srcKey, dstKey); try { @@ -234,8 +198,7 @@ static void innerFsRenameFile(final OBSFileSystem owner, owner.getObsClient().renameFile(renameObjectRequest); owner.getSchemeStatistics().incrementWriteOps(1); } catch (ObsException e) { - throw OBSCommonUtils.translateException( - "renameFile(" + srcKey + ", " + dstKey + ")", srcKey, e); + throw OBSCommonUtils.translateException("renameFile(" + srcKey + ", " + dstKey + ")", srcKey, e); } } @@ -248,25 +211,18 @@ static void innerFsRenameFile(final OBSFileSystem owner, * @param dstKey destination object key * @throws IOException io exception */ - static void fsRenameToNewObject(final OBSFileSystem owner, - final String srcKey, - final String dstKey) throws IOException { + static boolean fsRenameToNewObject(final OBSFileSystem owner, final String srcKey, final String dstKey) + throws IOException { String newSrcKey = srcKey; String newdstKey = dstKey; newSrcKey = OBSCommonUtils.maybeDeleteBeginningSlash(newSrcKey); newdstKey = OBSCommonUtils.maybeDeleteBeginningSlash(newdstKey); - if (!renameBasedOnPosix(owner, OBSCommonUtils.keyToPath(newSrcKey), - OBSCommonUtils.keyToPath(newdstKey))) { - throw new IOException( - "failed to rename " + newSrcKey + "to" + newdstKey); - } + return renameBasedOnPosix(owner, OBSCommonUtils.keyToPath(newSrcKey), OBSCommonUtils.keyToPath(newdstKey)); } // Delete a file. - private static int fsRemoveFile(final OBSFileSystem owner, - final String sonObjectKey, - final List files) - throws IOException { + private static int fsRemoveFile(final OBSFileSystem owner, final String sonObjectKey, + final List files) throws IOException { files.add(new KeyAndVersion(sonObjectKey)); if (files.size() == owner.getMaxEntriesToDelete()) { // batch delete files. 
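The rename and delete paths changed below replace single-shot operations with time-bounded retry loops: on a conflict the operation backs off and tries again until `OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY` is exhausted, rather than giving up after a fixed attempt count. A minimal standalone sketch of that policy (the 50 ms base delay, doubling factor, and cap are illustrative assumptions, not the connector's actual `OBSCommonUtils.getSleepTimeInMs` values):

```java
// Illustrative time-bounded retry policy, modeled on the loops in this patch:
// retries are limited by total elapsed time, not by attempt count, and the
// per-attempt delay grows exponentially up to a cap.
public final class RetryPolicy {
    private RetryPolicy() {
    }

    // Hypothetical stand-in for OBSCommonUtils.getSleepTimeInMs():
    // 50 ms base, doubled per retry, capped at 1600 ms.
    static long sleepTimeMs(int retryTime) {
        return 50L << Math.min(retryTime, 5);
    }

    // True when sleeping delayMs more would still stay inside the retry budget,
    // mirroring the "elapsed + delay <= MAX_TIME" checks in the patch.
    static boolean mayRetry(long startMs, long nowMs, long delayMs, long maxRetryMs) {
        return nowMs - startMs + delayMs <= maxRetryMs;
    }
}
```

The patch applies this same shape in `trashObjectIfNeed`, `trashFolderIfNeed`, `fsRecursivelyDeleteDirWithRetry`, `fsCreateFolder`, and `innerFsTruncateWithRetry`.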
@@ -277,8 +233,7 @@ private static int fsRemoveFile(final OBSFileSystem owner, } // Recursively delete a folder that might be not empty. - static boolean fsDelete(final OBSFileSystem owner, final FileStatus status, - final boolean recursive) + static boolean fsDelete(final OBSFileSystem owner, final FileStatus status, final boolean recursive) throws IOException, ObsException { long startTime = System.currentTimeMillis(); long threadId = Thread.currentThread().getId(); @@ -289,97 +244,168 @@ static boolean fsDelete(final OBSFileSystem owner, final FileStatus status, LOG.debug("delete: Path is a file"); trashObjectIfNeed(owner, key); } else { - LOG.debug("delete: Path is a directory: {} - recursive {}", f, - recursive); + LOG.debug("delete: Path is a directory: {} - recursive {}", f, recursive); key = OBSCommonUtils.maybeAddTrailingSlash(key); boolean isEmptyDir = OBSCommonUtils.isFolderEmpty(owner, key); if (key.equals("")) { - return OBSCommonUtils.rejectRootDirectoryDelete( - owner.getBucket(), isEmptyDir, recursive); + return OBSCommonUtils.rejectRootDirectoryDelete(owner.getBucket(), isEmptyDir, recursive); } if (!recursive && !isEmptyDir) { - LOG.warn("delete: Path is not empty: {} - recursive {}", f, - recursive); + LOG.warn("delete: Path is not empty: {} - recursive {}", f, recursive); throw new PathIsNotEmptyDirectoryException(f.toString()); } if (isEmptyDir) { - LOG.debug( - "delete: Deleting fake empty directory {} - recursive {}", - f, recursive); - OBSCommonUtils.deleteObject(owner, key); + LOG.debug("delete: Deleting fake empty directory {} - recursive {}", f, recursive); + try { + OBSCommonUtils.deleteObject(owner, key); + } catch (FileConflictException e) { + LOG.warn("delete emptyDir[{}] has conflict exception, " + + "will retry.", key, e); + trashFolderIfNeed(owner, key); + } } else { - LOG.debug( - "delete: Deleting objects for directory 
prefix {} to " + "delete - recursive {}", f, + recursive); trashFolderIfNeed(owner, key); } } long endTime = System.currentTimeMillis(); - LOG.debug("delete Path:{} thread:{}, timeUsedInMilliSec:{}", f, - threadId, endTime - startTime); + LOG.debug("delete Path:{} thread:{}, timeUsedInMilliSec:{}", f, threadId, endTime - startTime); return true; } - private static void trashObjectIfNeed(final OBSFileSystem owner, - final String key) + private static void trashObjectIfNeed(final OBSFileSystem owner, final String key) throws ObsException, IOException { - if (needToTrash(owner, key)) { - mkTrash(owner, key); - StringBuilder sb = new StringBuilder(owner.getTrashDir()); - sb.append(key); - if (owner.exists(new Path(sb.toString()))) { - SimpleDateFormat df = new SimpleDateFormat("-yyyyMMddHHmmss"); - sb.append(df.format(new Date())); - } - fsRenameToNewObject(owner, key, sb.toString()); - LOG.debug("Moved: '" + key + "' to trash at: " + sb.toString()); - } else { + if (!needToTrash(owner, key)) { OBSCommonUtils.deleteObject(owner, key); + return; + } + + mkTrash(owner, key); + String destKeyWithNoSuffix = owner.getTrashDir() + key; + String destKey = destKeyWithNoSuffix; + SimpleDateFormat df = new SimpleDateFormat("-yyyyMMddHHmmssSSS"); + if (owner.exists(new Path(destKey))) { + destKey = destKeyWithNoSuffix + df.format(new Date()); + } + // add timestamp when rename failed to avoid multi clients rename sources to the same target + long startTime = System.currentTimeMillis(); + int retryTime = 0; + long delayMs; + while (!fsRenameToNewObject(owner, key, destKey)) { + LOG.debug("Move file [{}] to [{}] failed, retryTime[{}].", key, + destKey, retryTime); + + delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime); + if (System.currentTimeMillis() - startTime + delayMs + > OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + LOG.error("Failed rename file [{}] to [{}] after " + + "retryTime[{}].", key, destKey, retryTime); + throw new IOException("Failed to rename " + key + " 
to " + destKey); + } else { + try { + Thread.sleep(delayMs); + } catch (InterruptedException ie) { + LOG.error("Failed rename file [{}] to [{}] after " + + "retryTime[{}].", key, destKey, retryTime); + throw new IOException("Failed to rename " + key + " to " + destKey); + } + } + destKey = destKeyWithNoSuffix + df.format(new Date()); + retryTime++; } + LOG.debug("Moved file : '" + key + "' to trash at: " + destKey); } - private static void trashFolderIfNeed(final OBSFileSystem owner, - final String key) throws ObsException, IOException { - if (needToTrash(owner, key)) { - mkTrash(owner, key); - StringBuilder sb = new StringBuilder(owner.getTrashDir()); - String subKey = OBSCommonUtils.maybeAddTrailingSlash(key); - sb.append(subKey); - if (owner.exists(new Path(sb.toString()))) { - SimpleDateFormat df = new SimpleDateFormat("-yyyyMMddHHmmss"); - sb.insert(sb.length() - 1, df.format(new Date())); + private static void trashFolderIfNeed(final OBSFileSystem owner, final String key) + throws ObsException, IOException { + if (!needToTrash(owner, key)) { + fsRecursivelyDeleteDirWithRetry(owner, key, true); + return; + } + + mkTrash(owner, key); + StringBuilder sb = new StringBuilder(owner.getTrashDir()); + SimpleDateFormat df = new SimpleDateFormat("-yyyyMMddHHmmssSSS"); + int endIndex = key.endsWith("/") ? 
key.length() - 1 : key.length(); + sb.append(key, 0, endIndex); + String destKeyWithNoSuffix = sb.toString(); + String destKey = destKeyWithNoSuffix; + if (owner.exists(new Path(sb.toString()))) { + destKey = destKeyWithNoSuffix + df.format(new Date()); + } + + String srcKey = OBSCommonUtils.maybeAddTrailingSlash(key); + String dstKey = OBSCommonUtils.maybeAddTrailingSlash(destKey); + + // add timestamp when rename failed to avoid multi clients rename sources to the same target + long startTime = System.currentTimeMillis(); + int retryTime = 0; + long delayMs; + while (!fsRenameToNewObject(owner, srcKey, dstKey)) { + LOG.debug("Move folder [{}] to [{}] failed, retryTime[{}].", key, + destKey, retryTime); + + delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime); + if (System.currentTimeMillis() - startTime + delayMs + > OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + LOG.error("Failed rename folder [{}] to [{}] after " + + "retryTime[{}].", key, destKey, retryTime); + throw new IOException("Failed to rename " + key + " to " + destKey); + } else { + try { + Thread.sleep(delayMs); + } catch (InterruptedException ie) { + LOG.error("Failed rename folder [{}] to [{}] after " + + "retryTime[{}].", key, destKey, retryTime); + throw new IOException("Failed to rename " + key + " to " + destKey); + } } + destKey = destKeyWithNoSuffix + df.format(new Date()); + dstKey = OBSCommonUtils.maybeAddTrailingSlash(destKey); + retryTime++; + } + LOG.debug("Moved folder : '" + key + "' to trash at: " + destKey); + } - String srcKey = OBSCommonUtils.maybeDeleteBeginningSlash(key); - srcKey = OBSCommonUtils.maybeAddTrailingSlash(srcKey); - String dstKey = OBSCommonUtils.maybeDeleteBeginningSlash( - sb.toString()); - dstKey = OBSCommonUtils.maybeAddTrailingSlash(dstKey); - if (!renameBasedOnPosix(owner, OBSCommonUtils.keyToPath(srcKey), - OBSCommonUtils.keyToPath(dstKey))) { - throw new IOException( - "failed to rename " + srcKey + "to" + dstKey); + static void 
fsRecursivelyDeleteDirWithRetry(final OBSFileSystem owner, + final String key, boolean deleteParent) throws IOException { + long startTime = System.currentTimeMillis(); + long delayMs; + int retryTime = 0; + while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + try { + long delNum = fsRecursivelyDeleteDir(owner, key, deleteParent); + LOG.debug("Recursively delete {} files/dirs when deleting {}", + delNum, key); + return; + } catch (FileConflictException e) { + LOG.warn("Recursively delete [{}] has conflict exception, " + + "retryTime[{}].", key, retryTime, e); + delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime); + retryTime++; + if (System.currentTimeMillis() - startTime + delayMs + < OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + try { + Thread.sleep(delayMs); + } catch (InterruptedException ie) { + throw e; + } + } } - LOG.debug("Moved: '" + key + "' to trash at: " + sb.toString()); - } else { - long delNum = fsRecursivelyDeleteDir(owner, key, true); - LOG.debug("Recursively delete {} files/dirs when deleting {}", - delNum, key); } + + fsRecursivelyDeleteDir(owner, key, deleteParent); } - static long fsRecursivelyDeleteDir(final OBSFileSystem owner, - final String parentKey, - final boolean deleteParent) throws IOException { + static long fsRecursivelyDeleteDir(final OBSFileSystem owner, final String parentKey, final boolean deleteParent) + throws IOException { long delNum = 0; - List<KeyAndVersion> subdirList = new ArrayList<>( - owner.getMaxEntriesToDelete()); - List<KeyAndVersion> fileList = new ArrayList<>( - owner.getMaxEntriesToDelete()); + List<KeyAndVersion> subdirList = new ArrayList<>(owner.getMaxEntriesToDelete()); + List<KeyAndVersion> fileList = new ArrayList<>(owner.getMaxEntriesToDelete()); - ListObjectsRequest request = OBSCommonUtils.createListObjectsRequest( - owner, parentKey, "/", owner.getMaxKeys()); + ListObjectsRequest request = OBSCommonUtils.createListObjectsRequest(owner, parentKey, "/", owner.getMaxKeys()); ObjectListing objects = 
OBSCommonUtils.listObjects(owner, request); while (true) { for (String commonPrefix : objects.getCommonPrefixes()) { @@ -427,13 +453,11 @@ static long fsRecursivelyDeleteDir(final OBSFileSystem owner, return delNum; } - private static boolean needToTrash(final OBSFileSystem owner, - final String key) { + private static boolean needToTrash(final OBSFileSystem owner, final String key) { String newKey = key; newKey = OBSCommonUtils.maybeDeleteBeginningSlash(newKey); if (owner.isEnableTrash()) { - String trashPathKey = OBSCommonUtils.pathToKey(owner, - new Path(owner.getTrashDir())); + String trashPathKey = OBSCommonUtils.pathToKey(owner, new Path(owner.getTrashDir())); if (newKey.startsWith(trashPathKey)) { return false; } @@ -442,11 +466,9 @@ private static boolean needToTrash(final OBSFileSystem owner, } // Delete a sub dir. - private static int fsRemoveSubdir(final OBSFileSystem owner, - final String subdirKey, - final List subdirList) - throws IOException { - fsRecursivelyDeleteDir(owner, subdirKey, false); + private static int fsRemoveSubdir(final OBSFileSystem owner, final String subdirKey, + final List subdirList) throws IOException { + fsRecursivelyDeleteDirWithRetry(owner, subdirKey, false); subdirList.add(new KeyAndVersion(subdirKey)); if (subdirList.size() == owner.getMaxEntriesToDelete()) { @@ -458,8 +480,7 @@ private static int fsRemoveSubdir(final OBSFileSystem owner, return 0; } - private static void mkTrash(final OBSFileSystem owner, final String key) - throws ObsException, IOException { + private static void mkTrash(final OBSFileSystem owner, final String key) throws ObsException, IOException { String newKey = key; StringBuilder sb = new StringBuilder(owner.getTrashDir()); newKey = OBSCommonUtils.maybeAddTrailingSlash(newKey); @@ -474,31 +495,26 @@ private static void mkTrash(final OBSFileSystem owner, final String key) } // Used to create a folder - static void fsCreateFolder(final OBSFileSystem owner, - final String objectName) - throws IOException 
{ + static void fsCreateFolder(final OBSFileSystem owner, final String objectName) throws IOException { String newObjectName = OBSCommonUtils.maybeAddTrailingSlash(objectName); - final NewFolderRequest newFolderRequest = new NewFolderRequest( - owner.getBucket(), newObjectName); + final NewFolderRequest newFolderRequest = new NewFolderRequest(owner.getBucket(), newObjectName); newFolderRequest.setAcl(owner.getCannedACL()); long len = newFolderRequest.getObjectKey().length(); long delayMs; int retryTime = 0; long startTime = System.currentTimeMillis(); - while (System.currentTimeMillis() - startTime - <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { try { owner.getObsClient().newFolder(newFolderRequest); owner.getSchemeStatistics().incrementWriteOps(1); owner.getSchemeStatistics().incrementBytesWritten(len); return; } catch (ObsException e) { - LOG.debug("Failed to create folder [{}], retry time [{}], " - + "exception [{}]", newObjectName, retryTime, e); + LOG.debug("Failed to create folder [{}], retry time [{}], " + "exception [{}]", newObjectName, + retryTime, e); - IOException ioException = OBSCommonUtils.translateException( - "innerFsCreateFolder", newObjectName, e); + IOException ioException = OBSCommonUtils.translateException("innerFsCreateFolder", newObjectName, e); if (!(ioException instanceof OBSIOException)) { throw ioException; } @@ -518,8 +534,7 @@ static void fsCreateFolder(final OBSFileSystem owner, } // Used to get the status of a file or folder in a file-gateway bucket. 
- static OBSFileStatus innerFsGetObjectStatus(final OBSFileSystem owner, - final Path f) throws IOException { + static OBSFileStatus innerFsGetObjectStatus(final OBSFileSystem owner, final Path f) throws IOException { final Path path = OBSCommonUtils.qualify(owner, f); String key = OBSCommonUtils.pathToKey(owner, path); LOG.debug("Getting path status for {} ({})", path, key); @@ -530,35 +545,24 @@ static OBSFileStatus innerFsGetObjectStatus(final OBSFileSystem owner, } try { - final GetAttributeRequest getAttrRequest = new GetAttributeRequest( - owner.getBucket(), key); - ObsFSAttribute meta = owner.getObsClient() - .getAttribute(getAttrRequest); + final GetAttributeRequest getAttrRequest = new GetAttributeRequest(owner.getBucket(), key); + ObsFSAttribute meta = owner.getObsClient().getAttribute(getAttrRequest); owner.getSchemeStatistics().incrementReadOps(1); if (fsIsFolder(meta)) { LOG.debug("Found file (with /): fake directory"); - return new OBSFileStatus(path, - OBSCommonUtils.dateToLong(meta.getLastModified()), + return new OBSFileStatus(path, OBSCommonUtils.dateToLong(meta.getLastModified()), owner.getShortUserName()); } else { - LOG.debug( - "Found file (with /): real file? should not happen: {}", - key); - return new OBSFileStatus( - meta.getContentLength(), - OBSCommonUtils.dateToLong(meta.getLastModified()), - path, - owner.getDefaultBlockSize(path), - owner.getShortUserName()); + LOG.debug("Found file (with /): real file? 
should not happen: {}", key); + return new OBSFileStatus(meta.getContentLength(), OBSCommonUtils.dateToLong(meta.getLastModified()), + path, owner.getDefaultBlockSize(path), owner.getShortUserName()); } } catch (ObsException e) { throw OBSCommonUtils.translateException("getFileStatus", path, e); } } - static ContentSummary fsGetDirectoryContentSummary( - final OBSFileSystem owner, - final String key) throws IOException { + static ContentSummary fsGetDirectoryContentSummary(final OBSFileSystem owner, final String key) throws IOException { String newKey = key; newKey = OBSCommonUtils.maybeAddTrailingSlash(newKey); long[] summary = {0, 0, 1}; @@ -569,11 +573,9 @@ static ContentSummary fsGetDirectoryContentSummary( request.setMaxKeys(owner.getMaxKeys()); ObjectListing objects = OBSCommonUtils.listObjects(owner, request); while (true) { - if (!objects.getCommonPrefixes().isEmpty() || !objects.getObjects() - .isEmpty()) { + if (!objects.getCommonPrefixes().isEmpty() || !objects.getObjects().isEmpty()) { if (LOG.isDebugEnabled()) { - LOG.debug("Found path as directory (with /): {}/{}", - objects.getCommonPrefixes().size(), + LOG.debug("Found path as directory (with /): {}/{}", objects.getCommonPrefixes().size(), objects.getObjects().size()); } for (String prefix : objects.getCommonPrefixes()) { @@ -596,30 +598,29 @@ static ContentSummary fsGetDirectoryContentSummary( } objects = OBSCommonUtils.continueListObjects(owner, objects); } - LOG.debug(String.format( - "file size [%d] - file count [%d] - directory count [%d] - " - + "file path [%s]", - summary[0], summary[1], summary[2], newKey)); + LOG.debug( + String.format("file size [%d] - file count [%d] - directory count [%d] - " + "file path [%s]", summary[0], + summary[1], summary[2], newKey)); return new ContentSummary.Builder().length(summary[0]) - .fileCount(summary[1]).directoryCount(summary[2]) - .spaceConsumed(summary[0]).build(); + .fileCount(summary[1]) + .directoryCount(summary[2]) + .spaceConsumed(summary[0]) + 
.build(); } - static void innerFsTruncateWithRetry(final OBSFileSystem owner, - final Path f, final long newLength) + static void innerFsTruncateWithRetry(final OBSFileSystem owner, final Path f, final long newLength) throws IOException { long delayMs; int retryTime = 0; long startTime = System.currentTimeMillis(); - while (System.currentTimeMillis() - startTime - <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { try { innerFsTruncate(owner, f, newLength); return; } catch (OBSIOException e) { - OBSFileSystem.LOG.debug("Failed to truncate [{}] to newLength" - + " [{}], retry time [{}], exception [{}]", f, - newLength, retryTime, e); + OBSFileSystem.LOG.debug( + "Failed to truncate [{}] to newLength" + " [{}], retry time [{}], exception [{}]", f, newLength, + retryTime, e); delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime); retryTime++; @@ -634,14 +635,13 @@ static void innerFsTruncateWithRetry(final OBSFileSystem owner, innerFsTruncate(owner, f, newLength); } - private static void innerFsTruncate(final OBSFileSystem owner, - final Path f, final long newLength) throws IOException { + private static void innerFsTruncate(final OBSFileSystem owner, final Path f, final long newLength) + throws IOException { LOG.debug("truncate {} to newLength {}", f, newLength); try { String key = OBSCommonUtils.pathToKey(owner, f); - owner.getObsClient() - .truncateObject(owner.getBucket(), key, newLength); + owner.getObsClient().truncateObject(owner.getBucket(), key, newLength); owner.getSchemeStatistics().incrementWriteOps(1); } catch (ObsException e) { throw OBSCommonUtils.translateException("truncate", f, e); diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSReadaheadInputStream.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSReadaheadInputStream.java deleted file mode 100644 index e8a0f20..0000000 --- 
a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSReadaheadInputStream.java +++ /dev/null @@ -1,691 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.obs; - -import com.google.common.annotations.VisibleForTesting; -import com.google.common.base.Preconditions; -import com.obs.services.ObsClient; -import com.obs.services.exception.ObsException; -import org.apache.commons.lang.StringUtils; -import org.apache.hadoop.classification.InterfaceAudience; -import org.apache.hadoop.classification.InterfaceStability; -import org.apache.hadoop.fs.CanSetReadahead; -import org.apache.hadoop.fs.FSExceptionMessages; -import org.apache.hadoop.fs.FSInputStream; -import org.apache.hadoop.fs.FileSystem; -import org.slf4j.Logger; - -import java.io.EOFException; -import java.io.IOException; -import java.util.Deque; -import java.util.Iterator; -import java.util.LinkedList; -import java.util.concurrent.ExecutionException; -import java.util.concurrent.Future; -import java.util.concurrent.ThreadPoolExecutor; - -import static org.apache.hadoop.fs.obs.OBSUtils.translateException; - -/** - * The input stream for an OBS object. - * - *

As this stream seeks within an object, it may close then re-open the stream. When this - * happens, any updated stream data may be retrieved, and, given the consistency model of Huawei - * OBS, outdated data may in fact be picked up. - * - *

As a result, the outcome of reading from a stream of an object which is actively manipulated - * during the read process is "undefined". - * - *

The class is marked as private as code should not be creating instances itself. Any extra - * feature (e.g. instrumentation) should be considered unstable. - * - *

Because it prints some of the state of the instrumentation, the output of {@link #toString()} - * must also be considered unstable. - */ -@InterfaceAudience.Private -@InterfaceStability.Evolving -public class OBSReadaheadInputStream extends FSInputStream implements CanSetReadahead { - - public static final Logger LOG = OBSFileSystem.LOG; - private final FileSystem.Statistics statistics; - private final ObsClient client; - private final String bucket; - private final String key; - private final long contentLength; - private final String uri; - private final OBSInputPolicy inputPolicy; - private final long MAX_RANGE; - /** - * Closed bit. Volatile so reads are non-blocking. Updates must be in a synchronized block to - * guarantee an atomic check and set - */ - private volatile boolean closed; - - private long readahead = Constants.DEFAULT_READAHEAD_RANGE; - private ThreadPoolExecutor readThreadPool; - private int bufferPartSize; - private Deque buffers = new LinkedList<>(); - private int readPartRemain; - private byte[] buffer; - private long bufferStart; - /** - * This is the actual position within the object, used by lazy seek to decide whether to seek on - * the next read or not. - */ - private long nextReadPos; - - /** - * The end of the content range of the last request. This is an absolute value of the range, not a - * length field. - */ - private long contentRangeFinish; - - /** The start of the content range of the last request. 
*/ - private long contentRangeStart; - - private OBSFileSystem fs; - - public OBSReadaheadInputStream( - String bucket, - String key, - long contentLength, - ObsClient client, - FileSystem.Statistics stats, - long readahead, - OBSInputPolicy inputPolicy, - ThreadPoolExecutor readThreadPool, - int bufferPartSize, - long maxRange, - OBSFileSystem fs) { - Preconditions.checkArgument(StringUtils.isNotEmpty(bucket), "No Bucket"); - Preconditions.checkArgument(StringUtils.isNotEmpty(key), "No Key"); - Preconditions.checkArgument(contentLength >= 0, "Negative content length"); - this.bucket = bucket; - this.key = key; - this.contentLength = contentLength; - this.client = client; - this.statistics = stats; - this.uri = "obs://" + this.bucket + "/" + this.key; - this.inputPolicy = inputPolicy; - setReadahead(readahead); - // Thread pool for read - this.readThreadPool = readThreadPool; - // Buffer size for each part - this.bufferPartSize = bufferPartSize; - this.MAX_RANGE = maxRange; - nextReadPos = 0; - bufferStart = -1; - this.fs = fs; - } - - /** - * Calculate the limit for a get request, based on input policy and state of object. - * - * @param inputPolicy input policy - * @param targetPos position of the read - * @param length length of bytes requested; if less than zero "unknown" - * @param contentLength total length of file - * @param readahead current readahead value - * @return the absolute value of the limit of the request. 
- */ - static long calculateRequestLimit( - OBSInputPolicy inputPolicy, long targetPos, long length, long contentLength, long readahead) { - long rangeLimit; - - rangeLimit = contentLength; - - // cannot read past the end of the object - rangeLimit = Math.min(contentLength, rangeLimit); - return rangeLimit; - } - - /** - * @param targetPos target position to reopen - * @param length object total length , or partial length from object start - * @param append if append is true, use buffers.offer() , else buffers.offerFirst() - */ - private synchronized void applyBuffersWithinRange(long targetPos, long length, boolean append) { - - if (targetPos >= contentRangeFinish) { - // Protect - return; - } - if (length > contentRangeFinish) { - // range protect - length = contentRangeFinish; - } - boolean randomReadBuffered = false; - Deque tmpBuffers = new LinkedList<>(); - while (length - targetPos > bufferPartSize) { - final long tmpEndPos = targetPos + bufferPartSize - 1; - final long tmpTargetPos = targetPos; - ReadBuffer readBuffer = new ReadBuffer(tmpTargetPos, tmpEndPos); - - Future task = readThreadPool.submit(new MultiReadTask(fs, bucket, key, client, readBuffer)); - readBuffer.setTask(task); - tmpBuffers.offer(readBuffer); - targetPos += bufferPartSize; - } - - // last incomplete buffer - final long tmpEndPos = length - 1; - final long tmpTargetPos = targetPos; - if (!randomReadBuffered) { - // Random read only once - ReadBuffer readBuffer = new ReadBuffer(tmpTargetPos, tmpEndPos); - if (readBuffer.getBuffer().length == 0) { - // EOF - readBuffer.setState(ReadBuffer.STATE.FINISH); - } else { - Future task = readThreadPool.submit(new MultiReadTask(fs, bucket, key, client, readBuffer)); - readBuffer.setTask(task); - } - tmpBuffers.offer(readBuffer); - } - - // Depends on append - if (append) { - while (!tmpBuffers.isEmpty()) { - buffers.offer(tmpBuffers.poll()); - } - } else { - while (!tmpBuffers.isEmpty()) { - // use tmp buffer as stack - 
buffers.offerFirst(tmpBuffers.pollLast()); - } - } - } - - private synchronized void closeAndClearBuffers() { - for (ReadBuffer buffer : buffers) { - // Interrupt all tasks - if (buffer.getTask() != null) { - buffer.getTask().cancel(true); - } - } - // clean buffers - buffers.clear(); - } - - /** - * Opens up the stream at specified target position and for given length. - * - * @param reason reason for reopen - * @param targetPos target position - * @param length length requested - * @throws IOException on any failure to open the object - */ - private synchronized void reopen(String reason, long targetPos, long length) throws IOException { - - if (targetPos < 0) { - throw new IOException("io exception"); - } - - contentRangeFinish = - calculateRequestLimit(inputPolicy, targetPos, length, contentLength, readahead); - LOG.debug( - "reopen({}) for {} range[{}-{}], length={}," + " , nextReadPosition={}", - uri, - reason, - targetPos, - contentRangeFinish, - length, - nextReadPos); - contentRangeStart = targetPos; - - boolean bufferExist = false; - // Reopen when already have buffers - // Buffers is considered continuous - while (buffers.size() != 0) { - // Peek first buffer - ReadBuffer buffer = buffers.peek(); - ReadBuffer lastbuffer = buffers.peekLast(); - if (buffer.getStart() <= targetPos && targetPos <= buffer.getEnd()) { - bufferExist = true; - // Good, no need to clear buffers - break; - } - if (targetPos < buffer.getStart()) { - // Target at left of first buffer - Iterator iterator = buffers.descendingIterator(); - while (iterator.hasNext()) { - ReadBuffer buf = iterator.next(); - if (buf.getEnd() - targetPos > MAX_RANGE) { - // DO remove - if (buf.getTask() != null) { - buf.getTask().cancel(true); - } - iterator.remove(); - } else { - // End immediately - break; - } - } - // Peek after delete - buffer = buffers.peek(); - lastbuffer = buffers.peekLast(); - - if (buffer == null) { - // applyBuffersWithinRange(targetPos,buffer.getStart(),false); - 
applyBuffersWithinRange(targetPos, targetPos + MAX_RANGE, false); - } else { - if (lastbuffer.getEnd() - targetPos != MAX_RANGE) { - applyBuffersWithinRange(lastbuffer.getEnd() + 1, targetPos + MAX_RANGE, true); - } - applyBuffersWithinRange(targetPos, buffer.getStart(), false); - } - bufferExist = true; - break; - } else if (targetPos > lastbuffer.getEnd()) { - // close all tasks and clear buffers - closeAndClearBuffers(); - } else { - // Exceed current buffer, but within all buffers - // buffers.poll(); - Iterator iterator = buffers.iterator(); - while (iterator.hasNext()) { - ReadBuffer buf = iterator.next(); - if (buf.getEnd() < targetPos) { - // DO remove - if (buf.getTask() != null) { - buf.getTask().cancel(true); - } - iterator.remove(); - } else { - // End immediately - break; - } - } - - // Peek after delete - buffer = buffers.peek(); - lastbuffer = buffers.peekLast(); - - if (lastbuffer.getEnd() < buffer.getStart() + MAX_RANGE) { - // Still room for new buffer, append - applyBuffersWithinRange(lastbuffer.getEnd() + 1, buffer.getStart() + MAX_RANGE, true); - } - bufferExist = true; - break; - - // continue - } - } - - try { - if (!bufferExist) { - applyBuffersWithinRange(targetPos, targetPos + MAX_RANGE, false); - } - - // Consume the first buffer - ReadBuffer readBuffer = buffers.peek(); - - if (readBuffer == null) { - this.buffer = null; - this.bufferStart = -1; - this.readPartRemain = 0; - throw new IOException("exception null buffer"); - } - - try { - // Wait until state change turn different from START - readBuffer.getTask().get(); - // readBuffer.await(ReadBuffer.STATE.START); - if (ReadBuffer.STATE.ERROR.equals(readBuffer.getState())) { - // ERROR occurred while requesting - this.buffer = null; - this.readPartRemain = 0; - this.bufferStart = -1; - } - this.buffer = readBuffer.getBuffer(); - this.readPartRemain = (int) (readBuffer.getEnd() - targetPos + 1); - this.bufferStart = readBuffer.getStart(); - } catch (InterruptedException e) { - 
LOG.warn("Interrupted waiting for reading data"); - } catch (ExecutionException e) { - // ERROR occurred while requesting - this.buffer = null; - this.readPartRemain = 0; - this.bufferStart = -1; - LOG.warn("Execute get buffer task fail cause: ", e.getCause()); - } - } catch (ObsException e) { - throw translateException("Reopen at position " + targetPos, uri, e); - } - } - - @Override - public synchronized long getPos() throws IOException { - return (nextReadPos < 0) ? 0 : nextReadPos; - } - - @Override - public synchronized void seek(long targetPos) throws IOException { - checkNotClosed(); - - // Do not allow negative seek - if (targetPos < 0) { - throw new EOFException(FSExceptionMessages.NEGATIVE_SEEK + " " + targetPos); - } - - if (this.contentLength <= 0) { - return; - } - // LOG.warn("Seek key: "+key+" targetPos: "+targetPos +" current nextReadPos: "+nextReadPos +" - // current partRemain: "+readPartRemain); - if (bufferStart != -1) { - // Must have buffer - long bufferEnd = bufferStart + buffer.length - 1; - if (targetPos >= bufferStart && targetPos <= bufferEnd) { - // Within buffer - // move right - // LOG.warn("Seek right inside buffer targetPos: "+targetPos +" current nextReadPos: - // "+nextReadPos +" current partRemain: "+readPartRemain); - readPartRemain = (int) (bufferEnd - targetPos + 1); - } else { - readPartRemain = 0; - } - } - - // Lazy seek - - nextReadPos = targetPos; - } - - /** - * Seek without raising any exception. This is for use in {@code finally} clauses - * - * @param positiveTargetPos a target position which must be positive. 
- */ - private void seekQuietly(long positiveTargetPos) { - try { - seek(positiveTargetPos); - } catch (IOException ioe) { - LOG.debug("Ignoring IOE on seek of {} to {}", uri, positiveTargetPos, ioe); - } - } - - @Override - public boolean seekToNewSource(long targetPos) throws IOException { - return false; - } - - @Override - public synchronized int read() throws IOException { - checkNotClosed(); - if (this.contentLength == 0 || (nextReadPos >= contentLength)) { - return -1; - } - - if (readPartRemain <= 0 && nextReadPos < contentLength) { - reopen("open", nextReadPos, contentLength); - } - - int byteRead = -1; - if (readPartRemain != 0) { - // Get first byte from buffer - byteRead = this.buffer[this.buffer.length - readPartRemain] & 0xFF; - } - - if (byteRead >= 0) { - // remove byte - nextReadPos++; - readPartRemain--; - } - if (statistics != null && byteRead >= 0) { - statistics.incrementBytesRead(byteRead); - } - - return byteRead; - } - - /** - * {@inheritDoc} - * - *

This updates the statistics on read operations started and whether or not the read operation - * "completed", that is: returned the exact number of bytes requested. - * - * @throws IOException if there are other problems - */ - @Override - public synchronized int read(byte[] buf, int off, int len) throws IOException { - - // LOG.warn("---read--- key: "+key+" buflength: "+buf.length+"off: "+off+"len: "+len); - checkNotClosed(); - validatePositionedReadArgs(nextReadPos, buf, off, len); - if (len == 0) { - return 0; - } - - if (this.contentLength == 0 || (nextReadPos >= contentLength)) { - // LOG.warn("Read exceed!nextPos: "+nextReadPos+" contentLength: "+contentLength); - return -1; - } - - long bytescount = 0; - while (nextReadPos < contentLength && bytescount < len) { - if (readPartRemain == 0) { - // LOG.warn("reopen , nextReadPos: "+nextReadPos+"length: "+(len-bytescount)); - reopen("continue buffer read", nextReadPos, len - bytescount); - } - - int bytes = 0; - for (int i = this.buffer.length - readPartRemain; i < this.buffer.length; i++) { - /*try {*/ - buf[(int) (off + bytescount)] = this.buffer[i]; - /* } catch (ArrayIndexOutOfBoundsException e){ - throw new ArrayIndexOutOfBoundsException("buffer size is "+this.buffer.length+" remain part: "+readPartRemain+" index:"+i); - }*/ - bytes++; - bytescount++; - if (bytescount >= len) { - // LOG.warn("bufcopy break off: "+off+" bytescount: "+bytescount +" len: "+len+" - // bytes:"+bytes); - break; - } - } - if (bytes > 0) { - nextReadPos += bytes; - readPartRemain -= bytes; - } else if (readPartRemain != 0) { - // not fully read from stream - throw new IOException("Sfailed to read , remain :" + readPartRemain); - } - } - if (statistics != null && bytescount > 0) { - statistics.incrementBytesRead(bytescount); - } - if (bytescount == 0 && len > 0) { - // LOG.warn("Read normally finished! 
key"+key+" nextPos: "+nextReadPos+" contentLength: - // "+contentLength); - return -1; - } else { - // LOG.warn("Read normally finished key"+key+" with bytescount: "+bytescount+"! nextPos: - // "+nextReadPos+" contentLength: "+contentLength); - return (int) (bytescount); - } - } - - /** - * Verify that the input stream is open. Non blocking; this gives the last state of the volatile - * {@link #closed} field. - * - * @throws IOException if the connection is closed. - */ - private void checkNotClosed() throws IOException { - if (closed) { - throw new IOException(uri + ": " + FSExceptionMessages.STREAM_IS_CLOSED); - } - } - - /** - * Close the stream. This triggers publishing of the stream statistics back to the filesystem - * statistics. This operation is synchronized, so that only one thread can attempt to close the - * connection; all later/blocked calls are no-ops. - * - * @throws IOException on any problem - */ - @Override - public synchronized void close() throws IOException { - // LOG.warn("Closed ! key"+key); - if (!closed) { - - closed = true; - closeAndClearBuffers(); - - // close or abort the stream - // closeStream("close() operation", this.contentRangeFinish, false); - // this is actually a no-op - super.close(); - } - } - - @Override - public synchronized int available() throws IOException { - checkNotClosed(); - - long remaining = remainingInFile(); - if (remaining > Integer.MAX_VALUE) { - return Integer.MAX_VALUE; - } - return (int) remaining; - } - - /** - * Bytes left in stream. - * - * @return how many bytes are left to read - */ - @InterfaceAudience.Private - @InterfaceStability.Unstable - public synchronized long remainingInFile() { - return this.contentLength - this.nextReadPos; - } - - /** - * Bytes left in the current request. Only valid if there is an active request. - * - * @return how many bytes are left to read in the current GET. 
- */ - @InterfaceAudience.Private - @InterfaceStability.Unstable - public synchronized long remainingInCurrentRequest() { - return this.contentRangeFinish - this.nextReadPos; - } - - @InterfaceAudience.Private - @InterfaceStability.Unstable - public synchronized long getContentRangeFinish() { - return contentRangeFinish; - } - - @InterfaceAudience.Private - @InterfaceStability.Unstable - public synchronized long getContentRangeStart() { - return contentRangeStart; - } - - @Override - public boolean markSupported() { - return false; - } - - /** - * String value includes statistics as well as stream state. Important: there are no guarantees - * as to the stability of this value. - * - * @return a string value for printing in logs/diagnostics - */ - @Override - @InterfaceStability.Unstable - public String toString() { - synchronized (this) { - final StringBuilder sb = new StringBuilder("OBSReadaheadInputStream{"); - sb.append(uri); - sb.append(" read policy=").append(inputPolicy); - sb.append(" nextReadPos=").append(nextReadPos); - sb.append(" contentLength=").append(contentLength); - sb.append(" contentRangeStart=").append(contentRangeStart); - sb.append(" contentRangeFinish=").append(contentRangeFinish); - sb.append(" remainingInCurrentRequest=").append(remainingInCurrentRequest()); - sb.append('\n'); - sb.append('}'); - return sb.toString(); - } - } - - /** - * Subclass {@code readFully()} operation which only seeks at the start of the series of - * operations; seeking back at the end. - * - *

This gives significantly higher performance if multiple read attempts are needed to fetch the - * data, as it does not break the HTTP connection. - * - *

To maintain thread safety requirements, this operation is synchronized for the duration of - * the sequence. {@inheritDoc} - */ - @Override - public void readFully(long position, byte[] buffer, int offset, int length) throws IOException { - checkNotClosed(); - validatePositionedReadArgs(position, buffer, offset, length); - if (length == 0) { - return; - } - int nread = 0; - synchronized (this) { - long oldPos = getPos(); - try { - seek(position); - while (nread < length) { - int nbytes = read(buffer, offset + nread, length - nread); - if (nbytes < 0) { - throw new EOFException(FSExceptionMessages.EOF_IN_READ_FULLY); - } - nread += nbytes; - } - } finally { - seekQuietly(oldPos); - } - } - } - - /** - * Get the current readahead value. - * - * @return a non-negative readahead value - */ - public synchronized long getReadahead() { - return readahead; - } - - @Override - public synchronized void setReadahead(Long readahead) { - if (readahead == null) { - this.readahead = Constants.DEFAULT_READAHEAD_RANGE; - } else { - Preconditions.checkArgument(readahead >= 0, "Negative readahead value"); - this.readahead = readahead; - } - } - - @VisibleForTesting - public Deque getBuffers() { - return buffers; - } -} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSUtils.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSUtils.java deleted file mode 100644 index 094d428..0000000 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSUtils.java +++ /dev/null @@ -1,523 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.obs; - -import com.google.common.base.Preconditions; -import com.google.common.collect.Lists; -import com.obs.services.exception.ObsException; -import com.obs.services.model.ObsObject; -import org.apache.commons.lang.StringUtils; -import org.apache.hadoop.classification.InterfaceAudience; -import org.apache.hadoop.classification.InterfaceStability; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.fs.Path; -import org.apache.hadoop.security.ProviderUtils; -import org.slf4j.Logger; - -import java.io.EOFException; -import java.io.FileNotFoundException; -import java.io.IOException; -import java.lang.reflect.Constructor; -import java.lang.reflect.Method; -import java.lang.reflect.Modifier; -import java.net.URI; -import java.nio.file.AccessDeniedException; -import java.util.Collection; -import java.util.Date; -import java.util.List; -import java.util.Map; -import java.util.concurrent.ExecutionException; - -import static org.apache.hadoop.fs.obs.Constants.*; - -/** Utility methods for OBS code. */ -@InterfaceAudience.Private -@InterfaceStability.Evolving -public final class OBSUtils { - - /** - * Core property for provider path. Duplicated here for consistent code across Hadoop version: - * {@value}. - */ - static final String CREDENTIAL_PROVIDER_PATH = "hadoop.security.credential.provider.path"; - /** Reuse the OBSFileSystem log. */ - private static final Logger LOG = OBSFileSystem.LOG; - - private OBSUtils() {} - - /** - * Extract an exception from a failed future, and convert to an IOE. 
- * - * @param operation operation which failed - * @param path path operated on (may be null) - * @param ee execution exception - * @return an IOE which can be thrown - */ - public static IOException extractException(String operation, String path, ExecutionException ee) { - IOException ioe; - Throwable cause = ee.getCause(); - if (cause instanceof ObsException) { - ioe = translateException(operation, path, (ObsException) cause); - } else if (cause instanceof IOException) { - ioe = (IOException) cause; - } else { - ioe = new IOException(operation + " failed: " + cause, cause); - } - return ioe; - } - - /** - * Create a file status instance from a listing. - * - * @param keyPath path to entry - * @param summary summary from OBS - * @param blockSize block size to declare. - * @param owner owner of the file - * @return a status entry - */ - public static OBSFileStatus createFileStatus( - Path keyPath, ObsObject summary, long blockSize, String owner) { - if (objectRepresentsDirectory( - summary.getObjectKey(), summary.getMetadata().getContentLength())) { - return new OBSFileStatus(true, keyPath, owner); - } else { - return new OBSFileStatus( - summary.getMetadata().getContentLength(), - dateToLong(summary.getMetadata().getLastModified()), - keyPath, - blockSize, - owner); - } - } - - /** - * Predicate: does the object represent a directory? - * - * @param name object name - * @param size object size - * @return true if it meets the criteria for being a directory - */ - public static boolean objectRepresentsDirectory(final String name, final long size) { - return !name.isEmpty() && name.charAt(name.length() - 1) == '/' && size == 0L; - } - - /** - * Date to long conversion. 
Handles null Dates that can be returned by OBS by returning 0 - * - * @param date date from OBS query - * @return timestamp of the object - */ - public static long dateToLong(final Date date) { - if (date == null) { - return 0L; - } - - return date.getTime(); - } - - /** - * Return the access key and secret for OBS API use. Credentials may exist in configuration, - * within credential providers or indicated in the UserInfo of the name URI param. - * - * @param name the URI for which we need the access keys. - * @param conf the Configuration object to interrogate for keys. - * @return OBSAccessKeys - * @throws IOException problems retrieving passwords from KMS. - */ - public static OBSLoginHelper.Login getOBSAccessKeys(URI name, Configuration conf) - throws IOException { - OBSLoginHelper.Login login = OBSLoginHelper.extractLoginDetailsWithWarnings(name); - Configuration c = - ProviderUtils.excludeIncompatibleCredentialProviders(conf, OBSFileSystem.class); - String accessKey = getPassword(c, ACCESS_KEY, login.getUser()); - String secretKey = getPassword(c, SECRET_KEY, login.getPassword()); - String sessionToken = getPassword(c, SESSION_TOKEN, login.getToken()); - return new OBSLoginHelper.Login(accessKey, secretKey, sessionToken); - } - - /** - * Get a password from a configuration, or, if a value is passed in, pick that up instead. - * - * @param conf configuration - * @param key key to look up - * @param val current value: if non empty this is used instead of querying the configuration. - * @return a password or "". - * @throws IOException on any problem - */ - static String getPassword(Configuration conf, String key, String val) throws IOException { - return StringUtils.isEmpty(val) ? lookupPassword(conf, key, "") : val; - } - - /** - * Get a password from a configuration/configured credential providers. 
- * - * @param conf configuration - * @param key key to look up - * @param defVal value to return if there is no password - * @return a password or the value in {@code defVal} - * @throws IOException on any problem - */ - static String lookupPassword(Configuration conf, String key, String defVal) throws IOException { - try { - final char[] pass = conf.getPassword(key); - return pass != null ? new String(pass).trim() : defVal; - } catch (IOException ioe) { - throw new IOException("Cannot find password option " + key, ioe); - } - } - - /** - * String information about a summary entry for debug messages. - * - * @param summary summary object - * @return string value - */ - public static String stringify(ObsObject summary) { - StringBuilder builder = new StringBuilder(summary.getObjectKey().length() + 100); - builder.append(summary.getObjectKey()).append(' '); - builder.append("size=").append(summary.getMetadata().getContentLength()); - return builder.toString(); - } - - /** - * Get an integer option >= the minimum allowed value. - * - * @param conf configuration - * @param key key to look up - * @param defVal default value - * @param min minimum value - * @return the value - * @throws IllegalArgumentException if the value is below the minimum - */ - static int intOption(Configuration conf, String key, int defVal, int min) { - int v = conf.getInt(key, defVal); - Preconditions.checkArgument( - v >= min, String.format("Value of %s: %d is below the minimum value %d", key, v, min)); - LOG.debug("Value of {} is {}", key, v); - return v; - } - - /** - * Get a long option >= the minimum allowed value. 
- * - * @param conf configuration - * @param key key to look up - * @param defVal default value - * @param min minimum value - * @return the value - * @throws IllegalArgumentException if the value is below the minimum - */ - static long longOption(Configuration conf, String key, long defVal, long min) { - long v = conf.getLong(key, defVal); - Preconditions.checkArgument( - v >= min, String.format("Value of %s: %d is below the minimum value %d", key, v, min)); - LOG.debug("Value of {} is {}", key, v); - return v; - } - - /** - * Get a long option >= the minimum allowed value, supporting memory prefixes K,M,G,T,P. - * - * @param conf configuration - * @param key key to look up - * @param defVal default value - * @param min minimum value - * @return the value - * @throws IllegalArgumentException if the value is below the minimum - */ - static long longBytesOption(Configuration conf, String key, long defVal, long min) { - long v = conf.getLongBytes(key, defVal); - Preconditions.checkArgument( - v >= min, String.format("Value of %s: %d is below the minimum value %d", key, v, min)); - LOG.debug("Value of {} is {}", key, v); - return v; - } - - /** - * Get a size property from the configuration: this property must be at least equal to {@link - * Constants#MULTIPART_MIN_SIZE}. If it is too small, it is rounded up to that minimum, and a - * warning printed. - * - * @param conf configuration - * @param property property name - * @param defVal default value - * @return the value, guaranteed to be above the minimum size - */ - public static long getMultipartSizeProperty(Configuration conf, String property, long defVal) { - long partSize = conf.getLongBytes(property, defVal); - if (partSize < MULTIPART_MIN_SIZE) { - LOG.warn("{} must be at least 5 MB; configured value is {}", property, partSize); - partSize = MULTIPART_MIN_SIZE; - } - return partSize; - } - - /** - * Ensure that the long value is in the range of an integer. 
- * - * @param name property name for error messages - * @param size original size - * @return the size, guaranteed to be less than or equal to the max value of an integer. - */ - public static int ensureOutputParameterInRange(String name, long size) { - if (size > Integer.MAX_VALUE) { - LOG.warn( - "obs: {} capped to ~2.14GB" + " (maximum allowed size with current output mechanism)", - name); - return Integer.MAX_VALUE; - } else { - return (int) size; - } - } - - /** - * Returns the public constructor of {@code cl} specified by the list of {@code args} or {@code - * null} if {@code cl} has no public constructor that matches that specification. - * - * @param cl class - * @param args constructor argument types - * @return constructor or null - */ - private static Constructor getConstructor(Class cl, Class... args) { - try { - Constructor cons = cl.getDeclaredConstructor(args); - return Modifier.isPublic(cons.getModifiers()) ? cons : null; - } catch (NoSuchMethodException | SecurityException e) { - return null; - } - } - - /** - * Returns the public static method of {@code cl} that accepts no arguments and returns {@code - * returnType} specified by {@code methodName} or {@code null} if {@code cl} has no public static - * method that matches that specification. - * - * @param cl class - * @param returnType return type - * @param methodName method name - * @return method or null - */ - private static Method getFactoryMethod(Class cl, Class returnType, String methodName) { - try { - Method m = cl.getDeclaredMethod(methodName); - if (Modifier.isPublic(m.getModifiers()) - && Modifier.isStatic(m.getModifiers()) - && returnType.isAssignableFrom(m.getReturnType())) { - return m; - } else { - return null; - } - } catch (NoSuchMethodException | SecurityException e) { - return null; - } - } - - /** - * Propagates bucket-specific settings into generic OBS configuration keys. 
This is done by - * propagating the values of the form {@code fs.obs.bucket.${bucket}.key} to {@code fs.obs.key}, - * for all values of "key" other than a small set of unmodifiable values. - * - *
<p>
The source of the updated property is set to the key name of the bucket property, to aid in - * diagnostics of where things came from. - * - *
<p>
Returns a new configuration. Why the clone? You can use the same conf for different - * filesystems, and the original values are not updated. - * - *
<p>
The {@code fs.obs.impl} property cannot be set, nor can any with the prefix {@code - * fs.obs.bucket}. - * - *
<p>
This method does not propagate security provider path information from the OBS property into - * the Hadoop common provider: callers must call {@link - * #patchSecurityCredentialProviders(Configuration)} explicitly. - * - * @param source Source Configuration object. - * @param bucket bucket name. Must not be empty. - * @return a (potentially) patched clone of the original. - */ - public static Configuration propagateBucketOptions(Configuration source, String bucket) { - - Preconditions.checkArgument(StringUtils.isNotEmpty(bucket), "bucket"); - final String bucketPrefix = FS_OBS_BUCKET_PREFIX + bucket + '.'; - LOG.debug("Propagating entries under {}", bucketPrefix); - final Configuration dest = new Configuration(source); - for (Map.Entry entry : source) { - final String key = entry.getKey(); - // get the (unexpanded) value. - final String value = entry.getValue(); - if (!key.startsWith(bucketPrefix) || bucketPrefix.equals(key)) { - continue; - } - // there's a bucket prefix, so strip it - final String stripped = key.substring(bucketPrefix.length()); - if (stripped.startsWith("bucket.") || "impl".equals(stripped)) { - // tell user off - LOG.debug("Ignoring bucket option {}", key); - } else { - // propagate the value, building a new origin field. - // to track overwrites, the generic key is overwritten even if - // already matches the new one. - final String generic = FS_OBS_PREFIX + stripped; - LOG.debug("Updating {}", generic); - dest.set(generic, value, key); - } - } - return dest; - } - - /** - * Patch the security credential provider information in {@link #CREDENTIAL_PROVIDER_PATH} with - * the providers listed in {@link Constants#OBS_SECURITY_CREDENTIAL_PROVIDER_PATH}. - * - *
<p>
This allows different buckets to use different credential files. - * - * @param conf configuration to patch - */ - static void patchSecurityCredentialProviders(Configuration conf) { - Collection customCredentials = - conf.getStringCollection(OBS_SECURITY_CREDENTIAL_PROVIDER_PATH); - Collection hadoopCredentials = conf.getStringCollection(CREDENTIAL_PROVIDER_PATH); - if (!customCredentials.isEmpty()) { - List all = Lists.newArrayList(customCredentials); - all.addAll(hadoopCredentials); - String joined = StringUtils.join(all, ','); - LOG.debug("Setting {} to {}", CREDENTIAL_PROVIDER_PATH, joined); - conf.set( - CREDENTIAL_PROVIDER_PATH, joined, "patch of " + OBS_SECURITY_CREDENTIAL_PROVIDER_PATH); - } - } - - /** - * Close the Closeable objects and ignore any Exception or null pointers. (This is the - * SLF4J equivalent of that in {@code IOUtils}). - * - * @param log the log to log at debug level. Can be null. - * @param closeables the objects to close - */ - public static void closeAll(Logger log, java.io.Closeable... closeables) { - for (java.io.Closeable c : closeables) { - if (c != null) { - try { - if (log != null) { - log.debug("Closing {}", c); - } - c.close(); - } catch (Exception e) { - if (log != null && log.isDebugEnabled()) { - log.debug("Exception in closing {}", c, e); - } - } - } - } - } - /* - ------------------------------------OBS UTILS------------------------------------------------ - */ - - /** - * Get low level details of a huawei OBS exception for logging; multi-line. - * - * @param e exception - * @return string details - */ - public static String stringify(ObsException e) { - StringBuilder builder = - new StringBuilder( - String.format( - "request id: %s, response code: %d, error code: %s, hostid: %s, message: %s", - e.getErrorRequestId(), - e.getResponseCode(), - e.getErrorCode(), - e.getErrorHostId(), - e.getErrorMessage())); - return builder.toString(); - } - - /** - * Translate an exception raised in an operation into an IOException. 
HTTP error codes are - * examined and can be used to build a more specific response. - * - * @param operation operation - * @param path path operated on (may be null) - * @param exception amazon exception raised - * @return an IOE which wraps the caught exception. - */ - @SuppressWarnings("ThrowableInstanceNeverThrown") - public static IOException translateException( - String operation, String path, ObsException exception) { - String message = - String.format("%s%s: status [%d] - request id [%s] - error code [%s] - error message [%s] - trace :%s ", operation, path != null ? (" on " + path) : "", - exception.getResponseCode(), exception.getErrorRequestId(), exception.getErrorCode(), exception.getErrorMessage(), exception); - - IOException ioe; - - int status = exception.getResponseCode(); - switch (status) { - case 301: - message = - String.format( - "Received permanent redirect response , status [%d] - request id [%s] - error code [%s] - message [%s]", exception.getResponseCode(), - exception.getErrorRequestId(), exception.getErrorCode(), exception.getErrorMessage()); - ioe = new OBSIOException(message, exception); - break; - // permissions - case 401: - case 403: - ioe = new AccessDeniedException(path, null, message); - ioe.initCause(exception); - break; - - // the object isn't there - case 404: - case 410: - ioe = new FileNotFoundException(message); - ioe.initCause(exception); - break; - - // out of range. This may happen if an object is overwritten with - // a shorter one while it is being read. - case 416: - ioe = new EOFException(message); - break; - - default: - // no specific exit code. Choose an IOE subclass based on the - // class - // of the caught exception - ioe = new OBSIOException(message, exception); - break; - } - return ioe; - } - - /** - * Translate an exception raised in an operation into an IOException. The specific type of - * IOException depends on the class of {@link ObsException} passed in, and any status codes - * included in the operation. 
That is: HTTP error codes are examined and can be used to build a - * more specific response. - * - * @param operation operation - * @param path path operated on (must not be null) - * @param exception amazon exception raised - * @return an IOE which wraps the caught exception. - */ - public static IOException translateException( - String operation, Path path, ObsException exception) { - return translateException(operation, path.toString(), exception); - } -} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSWriteOperationHelper.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSWriteOperationHelper.java index 2de1b40..fefab52 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSWriteOperationHelper.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSWriteOperationHelper.java @@ -62,8 +62,7 @@ class OBSWriteOperationHelper { /** * Class logger. */ - public static final Logger LOG = LoggerFactory.getLogger( - OBSWriteOperationHelper.class); + public static final Logger LOG = LoggerFactory.getLogger(OBSWriteOperationHelper.class); /** * Part number of the multipart task. @@ -100,11 +99,8 @@ protected OBSWriteOperationHelper(final OBSFileSystem fs) { * @param length size, if known. 
Use -1 for not known * @return the request */ - PutObjectRequest newPutRequest(final String destKey, - final InputStream inputStream, - final long length) { - return OBSCommonUtils.newPutObjectRequest(owner, destKey, - newObjectMetadata(length), inputStream); + PutObjectRequest newPutRequest(final String destKey, final InputStream inputStream, final long length) { + return OBSCommonUtils.newPutObjectRequest(owner, destKey, newObjectMetadata(length), inputStream); } /** @@ -114,11 +110,9 @@ PutObjectRequest newPutRequest(final String destKey, * @param sourceFile source file * @return the request */ - PutObjectRequest newPutRequest(final String destKey, - final File sourceFile) { + PutObjectRequest newPutRequest(final String destKey, final File sourceFile) { int length = (int) sourceFile.length(); - return OBSCommonUtils.newPutObjectRequest(owner, destKey, - newObjectMetadata(length), sourceFile); + return OBSCommonUtils.newPutObjectRequest(owner, destKey, newObjectMetadata(length), sourceFile); } /** @@ -150,22 +144,18 @@ public ObjectMetadata newObjectMetadata(final long length) { */ String initiateMultiPartUpload(final String destKey) throws IOException { LOG.debug("Initiating Multipart upload"); - final InitiateMultipartUploadRequest initiateMPURequest = - new InitiateMultipartUploadRequest(bucket, destKey); + final InitiateMultipartUploadRequest initiateMPURequest = new InitiateMultipartUploadRequest(bucket, destKey); initiateMPURequest.setAcl(owner.getCannedACL()); initiateMPURequest.setMetadata(newObjectMetadata(-1)); if (owner.getSse().isSseCEnable()) { initiateMPURequest.setSseCHeader(owner.getSse().getSseCHeader()); } else if (owner.getSse().isSseKmsEnable()) { - initiateMPURequest.setSseKmsHeader( - owner.getSse().getSseKmsHeader()); + initiateMPURequest.setSseKmsHeader(owner.getSse().getSseKmsHeader()); } try { - return obs.initiateMultipartUpload(initiateMPURequest) - .getUploadId(); + return obs.initiateMultipartUpload(initiateMPURequest).getUploadId(); 
} catch (ObsException ace) { - throw OBSCommonUtils.translateException("Initiate MultiPartUpload", - destKey, ace); + throw OBSCommonUtils.translateException("Initiate MultiPartUpload", destKey, ace); } } @@ -178,21 +168,16 @@ String initiateMultiPartUpload(final String destKey) throws IOException { * @return the result * @throws ObsException on problems. */ - CompleteMultipartUploadResult completeMultipartUpload( - final String destKey, final String uploadId, - final List partETags) - throws ObsException { + CompleteMultipartUploadResult completeMultipartUpload(final String destKey, final String uploadId, + final List partETags) throws ObsException { Preconditions.checkNotNull(uploadId); Preconditions.checkNotNull(partETags); - Preconditions.checkArgument(!partETags.isEmpty(), - "No partitions have been uploaded"); - LOG.debug("Completing multipart upload {} with {} parts", uploadId, - partETags.size()); + Preconditions.checkArgument(!partETags.isEmpty(), "No partitions have been uploaded"); + LOG.debug("Completing multipart upload {} with {} parts", uploadId, partETags.size()); // a copy of the list is required, so that the OBS SDK doesn't // attempt to sort an unmodifiable list. return obs.completeMultipartUpload( - new CompleteMultipartUploadRequest(bucket, destKey, uploadId, - new ArrayList<>(partETags))); + new CompleteMultipartUploadRequest(bucket, destKey, uploadId, new ArrayList<>(partETags))); } /** @@ -202,11 +187,9 @@ CompleteMultipartUploadResult completeMultipartUpload( * @param uploadId multipart operation Id * @throws ObsException on problems. 
Immediately execute */ - void abortMultipartUpload(final String destKey, final String uploadId) - throws ObsException { + void abortMultipartUpload(final String destKey, final String uploadId) throws ObsException { LOG.debug("Aborting multipart upload {}", uploadId); - obs.abortMultipartUpload( - new AbortMultipartUploadRequest(bucket, destKey, uploadId)); + obs.abortMultipartUpload(new AbortMultipartUploadRequest(bucket, destKey, uploadId)); } /** @@ -219,22 +202,15 @@ void abortMultipartUpload(final String destKey, final String uploadId) * @param sourceFile source file to be uploaded * @return part upload request */ - UploadPartRequest newUploadPartRequest( - final String destKey, - final String uploadId, - final int partNumber, - final int size, - final File sourceFile) { + UploadPartRequest newUploadPartRequest(final String destKey, final String uploadId, final int partNumber, + final int size, final File sourceFile) { Preconditions.checkNotNull(uploadId); Preconditions.checkArgument(sourceFile != null, "Data source"); - Preconditions.checkArgument(size > 0, "Invalid partition size %s", - size); - Preconditions.checkArgument( - partNumber > 0 && partNumber <= PART_NUMBER); + Preconditions.checkArgument(size > 0, "Invalid partition size %s", size); + Preconditions.checkArgument(partNumber > 0 && partNumber <= PART_NUMBER); - LOG.debug("Creating part upload request for {} #{} size {}", uploadId, - partNumber, size); + LOG.debug("Creating part upload request for {} #{} size {}", uploadId, partNumber, size); UploadPartRequest request = new UploadPartRequest(); request.setUploadId(uploadId); request.setBucketName(bucket); @@ -258,22 +234,15 @@ UploadPartRequest newUploadPartRequest( * @param uploadStream upload stream for the part * @return part upload request */ - UploadPartRequest newUploadPartRequest( - final String destKey, - final String uploadId, - final int partNumber, - final int size, - final InputStream uploadStream) { + UploadPartRequest 
newUploadPartRequest(final String destKey, final String uploadId, final int partNumber, + final int size, final InputStream uploadStream) { Preconditions.checkNotNull(uploadId); Preconditions.checkArgument(uploadStream != null, "Data source"); - Preconditions.checkArgument(size > 0, "Invalid partition size %s", - size); - Preconditions.checkArgument( - partNumber > 0 && partNumber <= PART_NUMBER); + Preconditions.checkArgument(size > 0, "Invalid partition size %s", size); + Preconditions.checkArgument(partNumber > 0 && partNumber <= PART_NUMBER); - LOG.debug("Creating part upload request for {} #{} size {}", uploadId, - partNumber, size); + LOG.debug("Creating part upload request for {} #{} size {}", uploadId, partNumber, size); UploadPartRequest request = new UploadPartRequest(); request.setUploadId(uploadId); request.setBucketName(bucket); @@ -298,13 +267,11 @@ public String toString(final String destKey) { * @return the upload initiated * @throws IOException on problems */ - PutObjectResult putObject(final PutObjectRequest putObjectRequest) - throws IOException { + PutObjectResult putObject(final PutObjectRequest putObjectRequest) throws IOException { try { return OBSCommonUtils.putObjectDirect(owner, putObjectRequest); } catch (ObsException e) { - throw OBSCommonUtils.translateException("put", - putObjectRequest.getObjectKey(), e); + throw OBSCommonUtils.translateException("put", putObjectRequest.getObjectKey(), e); } } } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/ObsClientFactory.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/ObsClientFactory.java index 906146b..f0cbc3a 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/ObsClientFactory.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/ObsClientFactory.java @@ -18,271 +18,29 @@ package org.apache.hadoop.fs.obs; -import com.obs.services.IObsCredentialsProvider; import com.obs.services.ObsClient; -import 
com.obs.services.ObsConfiguration; -import org.apache.commons.lang.StringUtils; + import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; -import org.apache.hadoop.conf.Configuration; -import org.apache.hadoop.conf.Configured; -import org.slf4j.Logger; import java.io.IOException; -import java.lang.reflect.Constructor; -import java.lang.reflect.InvocationTargetException; import java.net.URI; -import java.util.Optional; - -import static org.apache.hadoop.fs.obs.Constants.*; -import static org.apache.hadoop.fs.obs.OBSUtils.getOBSAccessKeys; -import static org.apache.hadoop.fs.obs.OBSUtils.intOption; -/** Factory for creation of OBS client instances to be used by {@link }. */ +/** + * Factory for creating OBS client instance to be used by {@link + * OBSFileSystem}. + */ @InterfaceAudience.Private @InterfaceStability.Unstable -interface ObsClientFactory { - /** - * Creates a new {@link ObsClient} client. This method accepts the OBS file system URI both in raw - * input form and validated form as separate arguments, because both values may be useful in - * logging. - * - * @param name raw input OBS file system URI - * @return OBS client - * @throws IOException IO problem - */ - ObsClient createObsClient(URI name) throws IOException; - - /** - * The default factory implementation, which calls the OBS SDK to configure and create an {@link - * ObsClient} that communicates with the OBS service. - */ - static class DefaultObsClientFactory extends Configured implements ObsClientFactory { - - private static final Logger LOG = OBSFileSystem.LOG; - - /** - * Initializes all OBS SDK settings related to connection management. 
- * - * @param conf Hadoop configuration - * @param obsConf OBS SDK configuration - */ - private static void initConnectionSettings(Configuration conf, ObsConfiguration obsConf) { - - obsConf.setMaxConnections( - intOption(conf, MAXIMUM_CONNECTIONS, DEFAULT_MAXIMUM_CONNECTIONS, 1)); - - boolean secureConnections = conf.getBoolean(SECURE_CONNECTIONS, DEFAULT_SECURE_CONNECTIONS); - - obsConf.setHttpsOnly(secureConnections); - - obsConf.setMaxErrorRetry(intOption(conf, MAX_ERROR_RETRIES, DEFAULT_MAX_ERROR_RETRIES, 0)); - - obsConf.setConnectionTimeout( - intOption(conf, ESTABLISH_TIMEOUT, DEFAULT_ESTABLISH_TIMEOUT, 0)); - - obsConf.setSocketTimeout(intOption(conf, SOCKET_TIMEOUT, DEFAULT_SOCKET_TIMEOUT, 0)); - - obsConf.setIdleConnectionTime( - intOption(conf, IDLE_CONNECTION_TIME, DEFAULT_IDLE_CONNECTION_TIME, 1)); - - obsConf.setMaxIdleConnections( - intOption(conf, MAX_IDLE_CONNECTIONS, DEFAULT_MAX_IDLE_CONNECTIONS, 1)); - - obsConf.setReadBufferSize( - intOption(conf, READ_BUFFER_SIZE, DEFAULT_READ_BUFFER_SIZE, -1)); // to be - // modified - obsConf.setWriteBufferSize( - intOption(conf, WRITE_BUFFER_SIZE, DEFAULT_WRITE_BUFFER_SIZE, -1)); // to be - // modified - obsConf.setUploadStreamRetryBufferSize( - intOption(conf, UPLOAD_STREAM_RETRY_SIZE, DEFAULT_UPLOAD_STREAM_RETRY_SIZE, 1)); - - obsConf.setSocketReadBufferSize( - intOption(conf, SOCKET_RECV_BUFFER, DEFAULT_SOCKET_RECV_BUFFER, -1)); - obsConf.setSocketWriteBufferSize( - intOption(conf, SOCKET_SEND_BUFFER, DEFAULT_SOCKET_SEND_BUFFER, -1)); - - obsConf.setKeepAlive(conf.getBoolean(KEEP_ALIVE, DEFAULT_KEEP_ALIVE)); - obsConf.setValidateCertificate( - conf.getBoolean(VALIDATE_CERTIFICATE, DEFAULT_VALIDATE_CERTIFICATE)); - obsConf.setVerifyResponseContentType( - conf.getBoolean(VERIFY_RESPONSE_CONTENT_TYPE, DEFAULT_VERIFY_RESPONSE_CONTENT_TYPE)); - obsConf.setCname(conf.getBoolean(CNAME, DEFAULT_CNAME)); - obsConf.setIsStrictHostnameVerification( - conf.getBoolean(STRICT_HOSTNAME_VERIFICATION, 
DEFAULT_STRICT_HOSTNAME_VERIFICATION)); - } - +interface OBSClientFactory { /** - * Initializes OBS SDK proxy support if configured. + * Creates a new {@link ObsClient} client. This method accepts the OBS file + * system URI both in raw input form and validated form as separate + * arguments, because both values may be useful in logging. * - * @param conf Hadoop configuration - * @param obsConf OBS SDK configuration - * @throws IllegalArgumentException if misconfigured + * @param name raw input OBS file system URI + * @return OBS client + * @throws IOException IO problem */ - private static void initProxySupport(Configuration conf, ObsConfiguration obsConf) - throws IllegalArgumentException, IOException { - String proxyHost = conf.getTrimmed(PROXY_HOST, ""); - int proxyPort = conf.getInt(PROXY_PORT, -1); - - if (!proxyHost.isEmpty() && proxyPort < 0) { - if (conf.getBoolean(SECURE_CONNECTIONS, DEFAULT_SECURE_CONNECTIONS)) { - LOG.warn("Proxy host set without port. Using HTTPS default 443"); - obsConf.getHttpProxy().setProxyPort(443); - } else { - LOG.warn("Proxy host set without port. 
Using HTTP default 80"); - obsConf.getHttpProxy().setProxyPort(80); - } - } - String proxyUsername = conf.getTrimmed(PROXY_USERNAME); - String proxyPassword = null; - char[] proxyPass = conf.getPassword(PROXY_PASSWORD); - if (proxyPass != null) { - proxyPassword = new String(proxyPass).trim(); - } - if ((proxyUsername == null) != (proxyPassword == null)) { - String msg = - "Proxy error: " + PROXY_USERNAME + " or " + PROXY_PASSWORD + " set without the other."; - LOG.error(msg); - throw new IllegalArgumentException(msg); - } - obsConf.setHttpProxy(proxyHost, proxyPort, proxyUsername, proxyPassword); - if (LOG.isDebugEnabled()) { - LOG.debug( - "Using proxy server {}:{} as user {} on " + "domain {} as workstation {}", - obsConf.getHttpProxy().getProxyAddr(), - obsConf.getHttpProxy().getProxyPort(), - String.valueOf(obsConf.getHttpProxy().getProxyUName()), - obsConf.getHttpProxy().getDomain(), - obsConf.getHttpProxy().getWorkstation()); - } - } - - /** - * Creates an {@link ObsClient} from the established configuration. - * - * @param conf Hadoop configuration - * @param obsConf ObsConfiguration - * @param name URL - * @return ObsClient client - */ - private static ObsClient createHuaweiObsClient( - Configuration conf, ObsConfiguration obsConf, URI name) throws IOException { - Class credentialsProviderClass; - BasicSessionCredential credentialsProvider; - ObsClient obsClient = null; - - try { - credentialsProviderClass = conf.getClass(OBS_CREDENTIALS_PROVIDER, null); - } catch (RuntimeException e) { - Throwable c = e.getCause() != null ? 
e.getCause() : e; - throw new IOException("From option " + OBS_CREDENTIALS_PROVIDER + ' ' + c, c); - } - - if (credentialsProviderClass == null) { - OBSLoginHelper.Login creds = getOBSAccessKeys(name, conf); - - String Ak = creds.getUser(); - String Sk = creds.getPassword(); - String token = creds.getToken(); - - String endPoint = conf.getTrimmed(ENDPOINT, ""); - obsConf.setEndPoint(endPoint); - if (StringUtils.isEmpty(Ak) && StringUtils.isEmpty(Sk)) { - Class securityProviderClass; - IObsCredentialsProvider securityProvider; - try { - securityProviderClass = conf.getClass(OBS_SECURITY_PROVIDER, null); - LOG.info("From option {} get {}", OBS_SECURITY_PROVIDER, securityProviderClass); - } catch (RuntimeException e) { - Throwable c = e.getCause() != null ? e.getCause() : e; - throw new IOException("From option " + OBS_SECURITY_PROVIDER + ' ' + c, c); - } - if (securityProviderClass != null) { - try { - Optional cons = tryGetConstructor(securityProviderClass, - new Class[]{URI.class, Configuration.class}); - - if (cons.isPresent()) { - securityProvider = (IObsCredentialsProvider) cons.get().newInstance(name, conf); - } else { - securityProvider = (IObsCredentialsProvider) securityProviderClass.getDeclaredConstructor().newInstance(); - } - - } catch (NoSuchMethodException | SecurityException e) { - Throwable c = e.getCause() != null ? e.getCause() : e; - throw new IOException("From option " + OBS_SECURITY_PROVIDER + ' ' + c, c); - } catch (IllegalAccessException e) { - Throwable c = e.getCause() != null ? e.getCause() : e; - throw new IOException("From option " + OBS_SECURITY_PROVIDER + ' ' + c, c); - } catch (InstantiationException e) { - Throwable c = e.getCause() != null ? e.getCause() : e; - throw new IOException("From option " + OBS_SECURITY_PROVIDER + ' ' + c, c); - } catch (InvocationTargetException e) { - Throwable c = e.getCause() != null ? 
e.getCause() : e; - throw new IOException("From option " + OBS_SECURITY_PROVIDER + ' ' + c, c); - } catch (RuntimeException e) { - Throwable c = e.getCause() != null ? e.getCause() : e; - throw new IOException("From option " + OBS_SECURITY_PROVIDER + ' ' + c, c); - } - obsClient = new ObsClient(securityProvider, obsConf); - } else { - obsClient = new ObsClient(Ak, Sk, token, obsConf); - } - } else { - obsClient = new ObsClient(Ak, Sk, token, obsConf); - } - } else { - try { - Constructor cons = - credentialsProviderClass.getDeclaredConstructor(URI.class, Configuration.class); - credentialsProvider = (BasicSessionCredential) cons.newInstance(name, conf); - } catch (NoSuchMethodException | SecurityException e) { - Throwable c = e.getCause() != null ? e.getCause() : e; - throw new IOException("From option " + OBS_CREDENTIALS_PROVIDER + ' ' + c, c); - } catch (IllegalAccessException e) { - Throwable c = e.getCause() != null ? e.getCause() : e; - throw new IOException("From option " + OBS_CREDENTIALS_PROVIDER + ' ' + c, c); - } catch (InstantiationException e) { - Throwable c = e.getCause() != null ? e.getCause() : e; - throw new IOException("From option " + OBS_CREDENTIALS_PROVIDER + ' ' + c, c); - } catch (InvocationTargetException e) { - Throwable c = e.getCause() != null ? 
e.getCause() : e; - throw new IOException("From option " + OBS_CREDENTIALS_PROVIDER + ' ' + c, c); - } - - String sessionToken = credentialsProvider.getSessionToken(); - String ak = credentialsProvider.getOBSAccessKeyId(); - String sk = credentialsProvider.getOBSSecretKey(); - String endPoint = conf.getTrimmed(ENDPOINT, ""); - obsConf.setEndPoint(endPoint); - if (sessionToken != null && sessionToken.length() != 0) { - obsClient = new ObsClient(ak, sk, sessionToken, obsConf); - } else { - obsClient = new ObsClient(ak, sk, obsConf); - } - } - return obsClient; - } - - public static Optional tryGetConstructor(Class mainClss, Class[] args) { - try { - Constructor constructor = mainClss.getDeclaredConstructor(args); - return Optional.ofNullable(constructor); - } catch (NoSuchMethodException e) { - // ignore - return Optional.empty(); - } - } - - @Override - public ObsClient createObsClient(URI name) throws IOException { - Configuration conf = getConf(); - ObsConfiguration obsConf = new ObsConfiguration(); - initConnectionSettings(conf, obsConf); - initProxySupport(conf, obsConf); - - return createHuaweiObsClient(conf, obsConf, name); - } - } + ObsClient createObsClient(URI name) throws IOException; } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/Pair.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/Pair.java index 123b8af..586a9a6 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/Pair.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/Pair.java @@ -19,9 +19,14 @@ package org.apache.hadoop.fs.obs; public class Pair { - /** Key. */ + /** + * Key. + */ private final K key; - /** Value. */ + + /** + * Value. + */ private final V value; /** @@ -90,8 +95,9 @@ public boolean equals(Object o) { return false; } else { Pair oP = (Pair) o; - return (key == null ? oP.key == null : key.equals(oP.key)) - && (value == null ? oP.value == null : value.equals(oP.value)); + return (key == null ? 
oP.key == null : key.equals(oP.key)) && (value == null + ? oP.value == null + : value.equals(oP.value)); } } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/ReadBuffer.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/ReadBuffer.java deleted file mode 100644 index c3efeb9..0000000 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/ReadBuffer.java +++ /dev/null @@ -1,85 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.hadoop.fs.obs; - -import java.util.concurrent.Future; - -/** Buffer with state */ -public class ReadBuffer { - - private Future task; - private STATE state; - private long start; - private long end; - private byte[] buffer; - - public ReadBuffer(long start, long end) { - this.start = start; - this.end = end; - this.state = STATE.START; - this.buffer = new byte[(int) (end - start + 1)]; - this.task = null; - } - - public STATE getState() { - return state; - } - - public void setState(STATE state) { - this.state = state; - } - - public long getStart() { - return start; - } - - public void setStart(long start) { - this.start = start; - } - - public long getEnd() { - return end; - } - - public void setEnd(long end) { - this.end = end; - } - - public byte[] getBuffer() { - return buffer; - } - - public void setBuffer(byte[] buffer) { - this.buffer = buffer; - } - - public Future getTask() { - return task; - } - - public void setTask(Future task) { - this.task = task; - } - - enum STATE { - START, - ERROR, - FINISH - } -} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/RenameFailedException.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/RenameFailedException.java index 3de84d0..a67dbf0 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/RenameFailedException.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/RenameFailedException.java @@ -31,8 +31,7 @@ class RenameFailedException extends PathIOException { */ private boolean exitCode = false; - RenameFailedException(final Path src, final Path optionalDest, - final String error) { + RenameFailedException(final Path src, final Path optionalDest, final String error) { super(src.toString(), error); setOperation("rename"); if (optionalDest != null) { diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/SemaphoredDelegatingExecutor.java 
b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/SemaphoredDelegatingExecutor.java index aec710b..782d9a5 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/SemaphoredDelegatingExecutor.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/SemaphoredDelegatingExecutor.java @@ -51,7 +51,7 @@ * /apache/s4/comm/staging/BlockingThreadPoolExecutorService.java) */ @InterfaceAudience.Private -class SemaphoredDelegatingExecutor extends ForwardingListeningExecutorService { +public class SemaphoredDelegatingExecutor extends ForwardingListeningExecutorService { /** * Number of permits queued. */ @@ -74,9 +74,7 @@ class SemaphoredDelegatingExecutor extends ForwardingListeningExecutorService { * @param permitSize number of permits into the queue permitted * @param fair should the semaphore be "fair" */ - SemaphoredDelegatingExecutor( - final ListeningExecutorService listExecutorDelegatee, - final int permitSize, + public SemaphoredDelegatingExecutor(final ListeningExecutorService listExecutorDelegatee, final int permitSize, final boolean fair) { this.permitCount = permitSize; queueingPermits = new Semaphore(permitSize, fair); @@ -90,30 +88,26 @@ protected ListeningExecutorService delegate() { @NotNull @Override - public <T> List<Future<T>> invokeAll( - @NotNull final Collection<? extends Callable<T>> tasks) { + public <T> List<Future<T>> invokeAll(@NotNull final Collection<? extends Callable<T>> tasks) { throw new RuntimeException("Not implemented"); } @NotNull @Override - public <T> List<Future<T>> invokeAll( - @NotNull final Collection<? extends Callable<T>> tasks, - final long timeout, @NotNull final TimeUnit unit) { + public <T> List<Future<T>> invokeAll(@NotNull final Collection<? extends Callable<T>> tasks, final long timeout, + @NotNull final TimeUnit unit) { throw new RuntimeException("Not implemented"); } @NotNull @Override - public <T> T invokeAny( - @NotNull final Collection<? extends Callable<T>> tasks) { + public <T> T invokeAny(@NotNull final Collection<? extends Callable<T>> tasks) { throw new RuntimeException("Not implemented"); } @Override - public <T> T invokeAny( - @NotNull final Collection<? extends Callable<T>> tasks, - final long timeout, @NotNull final TimeUnit unit) { + public <T> T invokeAny(@NotNull final Collection<? extends Callable<T>> tasks, final long timeout, + @NotNull final TimeUnit unit) { throw new RuntimeException("Not implemented"); } @@ -131,8 +125,7 @@ public <T> ListenableFuture<T> submit(@NotNull final Callable<T> task) { @NotNull @Override - public <T> ListenableFuture<T> submit(@NotNull final Runnable task, - @NotNull final T result) { + public <T> ListenableFuture<T> submit(@NotNull final Runnable task, @NotNull final T result) { try { queueingPermits.acquire(); } catch (InterruptedException e) { @@ -194,8 +187,7 @@ public int getPermitCount() { @Override public String toString() { - return "SemaphoredDelegatingExecutor{" + "permitCount=" - + getPermitCount() + ", available=" + return "SemaphoredDelegatingExecutor{" + "permitCount=" + getPermitCount() + ", available=" + getAvailablePermits() + ", waiting=" + getWaitingCount() + '}'; } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/SseWrapper.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/SseWrapper.java index a16c466..9f63004 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/SseWrapper.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/SseWrapper.java @@ -1,17 +1,17 @@ package org.apache.hadoop.fs.obs; +import static org.apache.hadoop.fs.obs.OBSConstants.SSE_KEY; +import static org.apache.hadoop.fs.obs.OBSConstants.SSE_TYPE; + import com.obs.services.model.SseCHeader; import com.obs.services.model.SseKmsHeader; import org.apache.hadoop.conf.Configuration; -import static org.apache.hadoop.fs.obs.OBSConstants.SSE_KEY; -import static org.apache.hadoop.fs.obs.OBSConstants.SSE_TYPE; - /** * Wrapper for Server-Side Encryption (SSE). */ -class SseWrapper { +public class SseWrapper { /** * SSE-KMS: Server-Side Encryption with Key Management Service.
*/ @@ -46,7 +46,7 @@ class SseWrapper { } } - boolean isSseCEnable() { + public boolean isSseCEnable() { return sseCHeader != null; } @@ -54,7 +54,7 @@ boolean isSseKmsEnable() { return sseKmsHeader != null; } - SseCHeader getSseCHeader() { + public SseCHeader getSseCHeader() { return sseCHeader; } diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/BasicInputPolicyFactory.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/BasicInputPolicyFactory.java new file mode 100644 index 0000000..408d09c --- /dev/null +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/BasicInputPolicyFactory.java @@ -0,0 +1,26 @@ +package org.apache.hadoop.fs.obs.input; + +import com.google.common.util.concurrent.ListeningExecutorService; + +import org.apache.hadoop.fs.FSInputStream; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.obs.OBSCommonUtils; +import org.apache.hadoop.fs.obs.OBSConstants; +import org.apache.hadoop.fs.obs.OBSFileSystem; + +/** + * Factory for the basic (sequential) read policy backed by OBSInputStream. + * + * @since 2021-03-10 + */ +public class BasicInputPolicyFactory implements InputPolicyFactory { + + @Override + public FSInputStream create(final OBSFileSystem obsFileSystem, String bucket, String key, Long contentLength, + FileSystem.Statistics statistics, ListeningExecutorService boundedThreadPool) { + long readAheadRange = OBSCommonUtils.longBytesOption(obsFileSystem.getConf(), OBSConstants.READAHEAD_RANGE, + OBSConstants.DEFAULT_READAHEAD_RANGE, 0); + return new OBSInputStream(bucket, key, contentLength, obsFileSystem.getObsClient(), statistics, readAheadRange, + obsFileSystem); + } +} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/ExtendInputPolicyFactory.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/ExtendInputPolicyFactory.java new file mode 100644 index 0000000..0bae43c --- /dev/null +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/ExtendInputPolicyFactory.java @@
-0,0 +1,40 @@ +package org.apache.hadoop.fs.obs.input; + +import com.google.common.util.concurrent.ListeningExecutorService; + +import org.apache.hadoop.fs.FSInputStream; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.obs.OBSCommonUtils; +import org.apache.hadoop.fs.obs.OBSConstants; +import org.apache.hadoop.fs.obs.OBSFileSystem; +import org.apache.hadoop.fs.obs.SemaphoredDelegatingExecutor; + +import java.io.IOException; + +/** + * Factory for the extended read policy backed by OBSExtendInputStream, + * which prefetches ranges on a bounded thread pool. + * + * @since 2021-03-10 + */ +public class ExtendInputPolicyFactory implements InputPolicyFactory { + + /** + * Create an input stream with concurrent readahead, bounded by a + * semaphored executor. + * + * @param bucket bucket name + * @param key object key + * @param contentLength length of the object + * @return the new input stream + */ + @Override + public FSInputStream create(final OBSFileSystem obsFileSystem, String bucket, String key, Long contentLength, + FileSystem.Statistics statistics, ListeningExecutorService boundedThreadPool) { + + int maxReadAhead = OBSCommonUtils.intOption(obsFileSystem.getConf(), OBSConstants.READAHEAD_MAX_NUM, + OBSConstants.DEFAULT_READAHEAD_MAX_NUM, 1); + + return new OBSExtendInputStream(obsFileSystem, obsFileSystem.getConf(), + new SemaphoredDelegatingExecutor(boundedThreadPool, maxReadAhead, true), bucket, key, contentLength, + statistics); + } +} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/InputPolicyFactory.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/InputPolicyFactory.java new file mode 100644 index 0000000..374930c --- /dev/null +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/InputPolicyFactory.java @@ -0,0 +1,17 @@ +package org.apache.hadoop.fs.obs.input; + +import com.google.common.util.concurrent.ListeningExecutorService; + +import org.apache.hadoop.fs.FSInputStream; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.obs.OBSFileSystem; + +/** + * Read policy factory: creates the input stream implementing a read policy. + * + * @since 2021-03-10 + */
+public interface InputPolicyFactory { + FSInputStream create(final OBSFileSystem obsFileSystem, String bucket, String key, Long contentLength, + FileSystem.Statistics statistics, ListeningExecutorService boundedThreadPool); +} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/InputPolicys.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/InputPolicys.java new file mode 100644 index 0000000..1b5b73d --- /dev/null +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/InputPolicys.java @@ -0,0 +1,28 @@ +package org.apache.hadoop.fs.obs.input; + +import org.apache.hadoop.fs.obs.OBSConstants; + +/** + * Maps a configured read policy name to the {@link InputPolicyFactory} + * that implements it. + */ +public final class InputPolicys { + + /** + * Create a factory. + * + * @param name factory name - the option from {@link OBSConstants}. + * @return the factory, ready to be initialized. + * @throws IllegalArgumentException if the name is unknown.
+ */ + public static InputPolicyFactory createFactory(final String name) { + switch (name) { + case OBSConstants.READAHEAD_POLICY_PRIMARY: + return new BasicInputPolicyFactory(); + case OBSConstants.READAHEAD_POLICY_ADVANCE: + return new ExtendInputPolicyFactory(); + default: + throw new IllegalArgumentException("Unsupported read policy" + " \"" + name + '"'); + } + } +} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/OBSExtendInputStream.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/OBSExtendInputStream.java new file mode 100644 index 0000000..73c4fae --- /dev/null +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/OBSExtendInputStream.java @@ -0,0 +1,355 @@ +package org.apache.hadoop.fs.obs.input; + +import com.google.common.base.Preconditions; +import com.google.common.util.concurrent.MoreExecutors; +import com.obs.services.ObsClient; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.ByteBufferReadable; +import org.apache.hadoop.fs.CanSetReadahead; +import org.apache.hadoop.fs.FSExceptionMessages; +import org.apache.hadoop.fs.FSInputStream; +import org.apache.hadoop.fs.FileSystem.Statistics; +import org.apache.hadoop.fs.obs.OBSConstants; +import org.apache.hadoop.fs.obs.OBSFileSystem; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.EOFException; +import java.io.IOException; +import java.nio.ByteBuffer; +import java.util.ArrayDeque; +import java.util.Queue; +import java.util.concurrent.ExecutorService; + +/** + * The input stream for the OBS object store. + * The class uses multi-part downloading to read data from the object content + * stream.
+ */ +public class OBSExtendInputStream extends FSInputStream implements CanSetReadahead, ByteBufferReadable { + public static final Logger LOG = LoggerFactory.getLogger(OBSExtendInputStream.class); + + private OBSFileSystem fs; + + private final ObsClient client; + + private Statistics statistics; + + private final String bucketName; + + private final String key; + + private long contentLength; + + private boolean closed; + + private int maxReadAhead; + + private long readaheadSize; + + private long pos; + + private long nextPos; + + private long lastBufferStart; + + private byte[] buffer; + + private long bufferRemaining; + + private ExecutorService readAheadExecutorService; + + private Queue<ReadAheadBuffer> readAheadBufferQueue = new ArrayDeque<>(); + + public OBSExtendInputStream(final OBSFileSystem obsFileSystem, Configuration conf, + ExecutorService readAheadExecutorService, String bucketName, String key, Long contentLength, + Statistics statistics) { + LOG.info("use OBSExtendInputStream"); + this.fs = obsFileSystem; + this.client = fs.getObsClient(); + this.statistics = statistics; + + this.bucketName = bucketName; + this.key = key; + this.contentLength = contentLength; + + readaheadSize = conf.getLong(OBSConstants.READAHEAD_RANGE, OBSConstants.DEFAULT_READAHEAD_RANGE); + this.maxReadAhead = conf.getInt(OBSConstants.READAHEAD_MAX_NUM, OBSConstants.DEFAULT_READAHEAD_MAX_NUM); + this.readAheadExecutorService = MoreExecutors.listeningDecorator(readAheadExecutorService); + + this.nextPos = 0; + this.lastBufferStart = -1; + + pos = 0; + bufferRemaining = 0; + closed = false; + } + + private void validateAndResetReopen(long pos) throws EOFException { + if (pos < 0) { + throw new EOFException("Cannot seek at negative position:" + pos); + } else if (pos > contentLength) { + throw new EOFException("Cannot seek after EOF, contentLength:" + contentLength + " position:" + pos); + } + if (this.buffer != null) { + if (LOG.isDebugEnabled()) { + LOG.debug("Aborting old stream to open at
pos " + pos); + } + this.buffer = null; + } + } + + private boolean isRandom(long position) { + + boolean isRandom = true; + + if (position == this.nextPos) { + isRandom = false; + } else { + //new seek, remove cache buffers if its byteStart is not equal to pos + while (readAheadBufferQueue.size() != 0) { + if (readAheadBufferQueue.element().getByteStart() != position) { + readAheadBufferQueue.poll(); + } else { + break; + } + } + } + return isRandom; + } + + private void getFromBuffer() throws IOException { + + ReadAheadBuffer readBuffer = readAheadBufferQueue.poll(); + readBuffer.lock(); + try { + readBuffer.await(ReadAheadBuffer.STATUS.INIT); + if (readBuffer.getStatus() == ReadAheadBuffer.STATUS.ERROR) { + this.buffer = null; + } else { + this.buffer = readBuffer.getBuffer(); + } + } catch (InterruptedException e) { + LOG.warn("interrupted when wait a read buffer"); + } finally { + readBuffer.unlock(); + } + + if (this.buffer == null) { + throw new IOException("Null IO stream"); + } + } + + /** + * Reopen the wrapped stream at give position, by seeking for + * data of a part length from object content stream. + * + * @param position position from start of a file + * @throws IOException if failed to reopen + */ + private synchronized void reopen(long position) throws IOException { + validateAndResetReopen(position); + + long partSize = position + readaheadSize > contentLength ? 
contentLength - position : readaheadSize; + boolean isRandom = isRandom(position); + this.nextPos = position + partSize; + int currentSize = readAheadBufferQueue.size(); + if (currentSize == 0) { + lastBufferStart = position - partSize; + } else { + ReadAheadBuffer[] readBuffers = readAheadBufferQueue.toArray(new ReadAheadBuffer[currentSize]); + lastBufferStart = readBuffers[currentSize - 1].getByteStart(); + } + + int maxLen = this.maxReadAhead - currentSize; + for (int i = 0; i < maxLen && i < (currentSize + 1) * 2; i++) { + if (lastBufferStart + partSize * (i + 1) >= contentLength) { + break; + } + + long byteStart = lastBufferStart + partSize * (i + 1); + long byteEnd = byteStart + partSize - 1; + if (byteEnd >= contentLength) { + byteEnd = contentLength - 1; + } + + ReadAheadBuffer readBuffer = new ReadAheadBuffer(byteStart, byteEnd); + if (readBuffer.getBuffer().length == 0) { + readBuffer.setStatus(ReadAheadBuffer.STATUS.SUCCESS); + } else { + this.readAheadExecutorService.execute(new ReadAheadTask(bucketName, key, client, readBuffer)); + } + readAheadBufferQueue.add(readBuffer); + if (isRandom) { + break; + } + } + getFromBuffer(); + pos = position; + bufferRemaining = partSize; + } + + @Override + public synchronized int read() throws IOException { + checkNotClosed(); + if (bufferRemaining <= 0 && pos < contentLength) { + reopen(pos); + } + + int byteRead = -1; + if (bufferRemaining != 0) { + byteRead = this.buffer[this.buffer.length - (int) bufferRemaining] & 0xFF; + } + if (byteRead >= 0) { + pos++; + bufferRemaining--; + } + + incrementBytesRead(byteRead); + + return byteRead; + } + + private void checkNotClosed() throws IOException { + if (closed) { + throw new IOException(FSExceptionMessages.STREAM_IS_CLOSED); + } + } + + private void validateReadArgs(byte[] buf, int off, int len) { + if (buf == null) { + throw new NullPointerException(); + } else if (off < 0 || len < 0 || len > buf.length - off) { + throw new IndexOutOfBoundsException(); + } + } + + 
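The range planning inside `reopen()` above — one readahead buffer per part of `partSize` bytes, with the last part clamped so it ends at `contentLength - 1` — can be sketched outside the connector. The class and method names below are illustrative, not taken from the OBS code:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified sketch of the readahead range planning: split the object
 * starting at `start` into at most `maxParts` byte ranges of `partSize`
 * bytes each, clamping the final range to the end of the object.
 */
public class ReadAheadRangePlanner {
    static List<long[]> plan(long start, long partSize, long contentLength, int maxParts) {
        List<long[]> ranges = new ArrayList<>();
        for (int i = 0; i < maxParts; i++) {
            long byteStart = start + partSize * i;
            if (byteStart >= contentLength) {
                break; // nothing left to prefetch
            }
            // last part is shortened so byteEnd never passes the object end
            long byteEnd = Math.min(byteStart + partSize - 1, contentLength - 1);
            ranges.add(new long[] {byteStart, byteEnd});
        }
        return ranges;
    }

    public static void main(String[] args) {
        // a 1000-byte object, 400-byte parts, up to 4 parts queued
        for (long[] r : plan(0, 400, 1000, 4)) {
            System.out.println(r[0] + "-" + r[1]); // 0-399, 400-799, 800-999
        }
    }
}
```

Each planned range would then be handed to a background task (the connector's `ReadAheadTask`) unless its length is zero.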
private void incrementBytesRead(long bytesRead) { + if (statistics != null && bytesRead > 0) { + statistics.incrementBytesRead(bytesRead); + } + } + + @Override + public synchronized int read(byte[] buf, int off, int len) throws IOException { + checkNotClosed(); + validateReadArgs(buf, off, len); + if (len == 0) { + return 0; + } + int byteRead = 0; + while (pos < contentLength && byteRead < len) { + if (bufferRemaining == 0) { + reopen(pos); + } + + int bytes = 0; + for (int i = this.buffer.length - (int) bufferRemaining; i < this.buffer.length; i++) { + buf[off + byteRead] = this.buffer[i]; + bytes++; + byteRead++; + if (off + byteRead >= len) { + break; + } + } + + if (bytes > 0) { + pos += bytes; + bufferRemaining -= bytes; + } else if (bufferRemaining != 0) { + throw new IOException("Failed to read from stream. Remaining:" + bufferRemaining); + } + } + + incrementBytesRead(byteRead); + + if (byteRead == 0 && len > 0) { + return -1; + } else { + return byteRead; + } + } + + @Override + public synchronized void close() { + if (closed) { + return; + } + closed = true; + this.buffer = null; + } + + @Override + public synchronized int available() throws IOException { + checkNotClosed(); + + long remain = contentLength - pos; + if (remain > Integer.MAX_VALUE) { + return Integer.MAX_VALUE; + } + return (int) remain; + } + + @Override + public synchronized void seek(long position) throws IOException { + checkNotClosed(); + if (position < 0) { + throw new EOFException(FSExceptionMessages.NEGATIVE_SEEK + " " + position); + } + + if (this.contentLength <= 0) { + return; + } + if (pos == position) { + return; + } else if (position > pos && position < pos + bufferRemaining) { + long len = position - pos; + pos = position; + bufferRemaining -= len; + } else { + pos = position; + bufferRemaining = 0; + } + } + + @Override + public synchronized long getPos() throws IOException { + checkNotClosed(); + return pos; + } + + @Override + public boolean seekToNewSource(long 
targetPos) throws IOException { + checkNotClosed(); + return false; + } + + @Override + public int read(ByteBuffer byteBuffer) throws IOException { + int len = byteBuffer.remaining(); + if (len == 0) { + return 0; + } + + byte[] buf = new byte[len]; + int size = read(buf, 0, len); + if (size != -1) { + byteBuffer.put(buf, 0, size); + } + + return size; + } + + @Override + public synchronized void setReadahead(Long readahead) throws IOException { + checkNotClosed(); + if (readahead == null) { + this.readaheadSize = OBSConstants.DEFAULT_READAHEAD_RANGE; + } else { + Preconditions.checkArgument(readahead >= 0, "Negative readahead value"); + this.readaheadSize = readahead; + } + } + +} diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/OBSInputStream.java similarity index 69% rename from hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java rename to hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/OBSInputStream.java index 2c7a8ec..c840653 100644 --- a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/OBSInputStream.java +++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/OBSInputStream.java @@ -1,24 +1,4 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.hadoop.fs.obs; - -import static org.apache.hadoop.fs.obs.OBSCommonUtils.translateException; +package org.apache.hadoop.fs.obs.input; import com.google.common.base.Preconditions; import com.obs.services.ObsClient; @@ -34,6 +14,11 @@ import org.apache.hadoop.fs.FSExceptionMessages; import org.apache.hadoop.fs.FSInputStream; import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.obs.BasicMetricsConsumer; +import org.apache.hadoop.fs.obs.OBSCommonUtils; +import org.apache.hadoop.fs.obs.OBSConstants; +import org.apache.hadoop.fs.obs.OBSFileSystem; +import org.apache.hadoop.fs.obs.OBSIOException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -62,13 +47,11 @@ */ @InterfaceAudience.Private @InterfaceStability.Evolving -class OBSInputStream extends FSInputStream - implements CanSetReadahead, ByteBufferReadable { +public class OBSInputStream extends FSInputStream implements CanSetReadahead, ByteBufferReadable { /** * Class logger. */ - public static final Logger LOG = LoggerFactory.getLogger( - OBSInputStream.class); + public static final Logger LOG = LoggerFactory.getLogger(OBSInputStream.class); /** * The statistics for OBS file system. 
@@ -144,20 +127,12 @@ class OBSInputStream extends FSInputStream */ private long contentRangeStart; - OBSInputStream( - final String bucketName, - final String bucketKey, - final long fileStatusLength, - final ObsClient obsClient, - final FileSystem.Statistics stats, - final long readAheadRangeValue, + OBSInputStream(final String bucketName, final String bucketKey, final long fileStatusLength, + final ObsClient obsClient, final FileSystem.Statistics stats, final long readAheadRangeValue, final OBSFileSystem obsFileSystem) { - Preconditions.checkArgument(StringUtils.isNotEmpty(bucketName), - "No Bucket"); - Preconditions.checkArgument(StringUtils.isNotEmpty(bucketKey), - "No Key"); - Preconditions.checkArgument(fileStatusLength >= 0, - "Negative content length"); + Preconditions.checkArgument(StringUtils.isNotEmpty(bucketName), "No Bucket"); + Preconditions.checkArgument(StringUtils.isNotEmpty(bucketKey), "No Key"); + Preconditions.checkArgument(fileStatusLength >= 0, "Negative content length"); this.bucket = bucketName; this.key = bucketKey; this.contentLength = fileStatusLength; @@ -179,12 +154,10 @@ class OBSInputStream extends FSInputStream * @param readahead current readahead value * @return the absolute value of the limit of the request. */ - static long calculateRequestLimit( - final long targetPos, final long length, final long contentLength, + static long calculateRequestLimit(final long targetPos, final long length, final long contentLength, final long readahead) { // cannot read past the end of the object - return Math.min(contentLength, length < 0 ? contentLength - : targetPos + Math.max(readahead, length)); + return Math.min(contentLength, length < 0 ? 
contentLength : targetPos + Math.max(readahead, length)); } /** @@ -195,18 +168,14 @@ static long calculateRequestLimit( * @param length length requested * @throws IOException on any failure to open the object */ - private synchronized void reopen(final String reason, final long targetPos, - final long length) - throws IOException { + private synchronized void reopen(final String reason, final long targetPos, final long length) throws IOException { long startTime = System.currentTimeMillis(); long threadId = Thread.currentThread().getId(); if (wrappedStream != null) { closeStream("reopen(" + reason + ")", contentRangeFinish); } - contentRangeFinish = - calculateRequestLimit(targetPos, length, contentLength, - readAheadRange); + contentRangeFinish = calculateRequestLimit(targetPos, length, contentLength, readAheadRange); try { GetObjectRequest request = new GetObjectRequest(bucket, key); @@ -218,29 +187,17 @@ private synchronized void reopen(final String reason, final long targetPos, wrappedStream = client.getObject(request).getObjectContent(); contentRangeStart = targetPos; if (wrappedStream == null) { - throw new IOException( - "Null IO stream from reopen of (" + reason + ") " + uri); + throw new IOException("Null IO stream from reopen of (" + reason + ") " + uri); } } catch (ObsException e) { - throw translateException("Reopen at position " + targetPos, uri, e); + throw OBSCommonUtils.translateException("Reopen at position " + targetPos, uri, e); } this.streamCurrentPos = targetPos; long endTime = System.currentTimeMillis(); - LOG.debug( - "reopen({}) for {} range[{}-{}], length={}," - + " streamPosition={}, nextReadPosition={}, thread={}, " - + "timeUsedInMilliSec={}", - uri, - reason, - targetPos, - contentRangeFinish, - length, - streamCurrentPos, - nextReadPos, - threadId, - endTime - startTime - ); + LOG.debug("reopen({}) for {} range[{}-{}], length={}," + " streamPosition={}, nextReadPosition={}, thread={}, " + + "timeUsedInMilliSec={}", uri, reason, 
targetPos, contentRangeFinish, length, streamCurrentPos, + nextReadPos, threadId, endTime - startTime); } @Override @@ -257,8 +214,7 @@ public synchronized void seek(final long targetPos) throws IOException { // Do not allow negative seek if (targetPos < 0) { - throw new EOFException( - FSExceptionMessages.NEGATIVE_SEEK + " " + targetPos); + throw new EOFException(FSExceptionMessages.NEGATIVE_SEEK + " " + targetPos); } if (this.contentLength <= 0) { @@ -279,8 +235,7 @@ private void seekQuietly(final long positiveTargetPos) { try { seek(positiveTargetPos); } catch (IOException ioe) { - LOG.debug("Ignoring IOE on seek of {} to {}", uri, - positiveTargetPos, ioe); + LOG.debug("Ignoring IOE on seek of {} to {}", uri, positiveTargetPos, ioe); } } @@ -307,10 +262,8 @@ private void seekInStream(final long targetPos) throws IOException { // then choose whichever comes first: the range or the EOF long remainingInCurrentRequest = remainingInCurrentRequest(); - long forwardSeekLimit = Math.min(remainingInCurrentRequest, - forwardSeekRange); - boolean skipForward = remainingInCurrentRequest > 0 - && diff <= forwardSeekLimit; + long forwardSeekLimit = Math.min(remainingInCurrentRequest, forwardSeekRange); + boolean skipForward = remainingInCurrentRequest > 0 && diff <= forwardSeekLimit; if (skipForward) { // the forward seek range is within the limits LOG.debug("Forward seek on {}, of {} bytes", uri, diff); @@ -327,8 +280,7 @@ private void seekInStream(final long targetPos) throws IOException { return; } else { // log a warning; continue to attempt to re-open - LOG.info("Failed to seek on {} to {}. Current position {}", - uri, targetPos, streamCurrentPos); + LOG.info("Failed to seek on {} to {}. 
Current position {}", uri, targetPos, streamCurrentPos); } } } else if (diff == 0 && remainingInCurrentRequest() > 0) { @@ -358,24 +310,20 @@ public boolean seekToNewSource(final long targetPos) throws IOException { * @param len length of the content that needs to be read * @throws IOException on any failure to lazy seek */ - private void lazySeek(final long targetPos, final long len) - throws IOException { + private void lazySeek(final long targetPos, final long len) throws IOException { int retryTime = 0; long delayMs; long startTime = System.currentTimeMillis(); - while (System.currentTimeMillis() - startTime - <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { + while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) { try { // For lazy seek seekInStream(targetPos); } catch (IOException e) { if (wrappedStream != null) { - closeStream("lazySeek() seekInStream has exception ", - this.contentRangeFinish); + closeStream("lazySeek() seekInStream has exception ", this.contentRangeFinish); } - LOG.warn("IOException occurred in lazySeek, retry: {}", - retryTime, e); + LOG.warn("IOException occurred in lazySeek, retry: {}", retryTime, e); delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime); retryTime++; if (System.currentTimeMillis() - startTime + delayMs @@ -400,8 +348,7 @@ private void lazySeek(final long targetPos, final long len) return; } catch (OBSIOException e) { - LOG.debug("IOException occurred in lazySeek, retry: {}", - retryTime, e); + LOG.debug("IOException occurred in lazySeek, retry: {}", retryTime, e); delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime); retryTime++; if (System.currentTimeMillis() - startTime + delayMs @@ -446,8 +393,7 @@ public synchronized int read() throws IOException { long startTime = System.currentTimeMillis(); long threadId = Thread.currentThread().getId(); long endTime; - boolean isTrue = - this.contentLength == 0 || nextReadPos >= contentLength; + boolean isTrue = 
this.contentLength == 0 || nextReadPos >= contentLength; if (isTrue) { return -1; @@ -476,9 +422,7 @@ public synchronized int read() throws IOException { } catch (IOException e) { exception = e; onReadFailure(e, 1); - LOG.debug( - "read of [{}] failed, retry time[{}], due to exception[{}]", - uri, retryTime, e); + LOG.debug("read of [{}] failed, retry time[{}], due to exception[{}]", uri, retryTime, e); delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime); retryTime++; if (System.currentTimeMillis() - startTime + delayMs @@ -486,30 +430,22 @@ public synchronized int read() throws IOException { try { sleepInLock(delayMs); } catch (InterruptedException ie) { - LOG.error( - "read of [{}] failed, retry time[{}], due to " - + "exception[{}]", - uri, retryTime, e); + LOG.error("read of [{}] failed, retry time[{}], due to " + "exception[{}]", uri, retryTime, e); throw e; } } } - } while (System.currentTimeMillis() - retryStartTime - <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY); + } while (System.currentTimeMillis() - retryStartTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY); if (exception != null) { endTime = System.currentTimeMillis(); if (fs.getMetricSwitch()) { - BasicMetricsConsumer.MetricRecord record = - new BasicMetricsConsumer.MetricRecord( - BasicMetricsConsumer.MetricRecord.BYTEBUF, - BasicMetricsConsumer.MetricRecord.READ, false, - endTime - startTime); + BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord( + BasicMetricsConsumer.MetricRecord.BYTEBUF, BasicMetricsConsumer.MetricRecord.READ, false, + endTime - startTime); OBSCommonUtils.setMetricsInfo(fs, record); } - LOG.error( - "read of [{}] failed, retry time[{}], due to exception[{}]", - uri, retryTime, exception); + LOG.error("read of [{}] failed, retry time[{}], due to exception[{}]", uri, retryTime, exception); throw exception; } @@ -525,17 +461,13 @@ public synchronized int read() throws IOException { endTime = System.currentTimeMillis(); long position = 
 byteRead >= 0 ? nextReadPos - 1 : nextReadPos;
         if (fs.getMetricSwitch()) {
-            BasicMetricsConsumer.MetricRecord record =
-                new BasicMetricsConsumer.MetricRecord(
-                    BasicMetricsConsumer.MetricRecord.ONEBYTE,
-                    BasicMetricsConsumer.MetricRecord.READ, true,
-                    endTime - startTime);
+            BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                BasicMetricsConsumer.MetricRecord.ONEBYTE, BasicMetricsConsumer.MetricRecord.READ, true,
+                endTime - startTime);
             OBSCommonUtils.setMetricsInfo(fs, record);
         }
-        LOG.debug(
-            "read-0arg uri:{}, contentLength:{}, position:{}, readValue:{}, "
-                + "thread:{}, timeUsedMilliSec:{}", uri, contentLength,
-            position, byteRead, threadId, endTime - startTime);
+        LOG.debug("read-0arg uri:{}, contentLength:{}, position:{}, readValue:{}, " + "thread:{}, timeUsedMilliSec:{}",
+            uri, contentLength, position, byteRead, threadId, endTime - startTime);
         return byteRead;
     }

@@ -547,24 +479,18 @@ public synchronized int read() throws IOException {
      * @param length length of data being attempted to read
      * @throws IOException any exception thrown on the re-open attempt.
      */
-    private synchronized void onReadFailure(final IOException ioe,
-        final int length) throws IOException {
-        LOG.debug(
-            "Got exception while trying to read from stream {}"
-                + " trying to recover: " + ioe, uri);
+    private synchronized void onReadFailure(final IOException ioe, final int length) throws IOException {
+        LOG.debug("Got exception while trying to read from stream {}" + " trying to recover: " + ioe, uri);
         int retryTime = 0;
         long delayMs;
         long startTime = System.currentTimeMillis();
-        while (System.currentTimeMillis() - startTime
-            <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
+        while (System.currentTimeMillis() - startTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
             try {
                 reopen("failure recovery", streamCurrentPos, length);
                 return;
             } catch (OBSIOException e) {
-                LOG.debug(
-                    "OBSIOException occurred in reopen for failure recovery, "
-                        + "the {} retry time",
-                    retryTime, e);
+                LOG.debug("OBSIOException occurred in reopen for failure recovery, " + "the {} retry time", retryTime,
+                    e);
                 delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime);
                 retryTime++;
                 try {
@@ -579,8 +505,7 @@ private synchronized void onReadFailure(final IOException ioe,
     }

     @Override
-    public synchronized int read(final ByteBuffer byteBuffer)
-        throws IOException {
+    public synchronized int read(final ByteBuffer byteBuffer) throws IOException {
         fs.checkOpen();
         checkStreamOpen();
         long startTime = System.currentTimeMillis();
@@ -594,8 +519,7 @@ public synchronized int read(final ByteBuffer byteBuffer)
         }

         byte[] buf = new byte[len];
-        boolean isTrue =
-            this.contentLength == 0 || nextReadPos >= contentLength;
+        boolean isTrue = this.contentLength == 0 || nextReadPos >= contentLength;
         if (isTrue) {
             return -1;
         }
@@ -615,8 +539,7 @@ public synchronized int read(final ByteBuffer byteBuffer)
         long startRetryTime = System.currentTimeMillis();
         do {
             try {
-                bytesRead = tryToReadFromInputStream(wrappedStream, buf, 0,
-                    len);
+                bytesRead = tryToReadFromInputStream(wrappedStream, buf, 0, len);
                 if (bytesRead == -1) {
                     return -1;
                 }
@@ -628,10 +551,8 @@ public synchronized int read(final ByteBuffer byteBuffer)
             } catch (IOException e) {
                 exception = e;
                 onReadFailure(e, len);
-                LOG.debug(
-                    "read len[{}] of [{}] failed, retry time[{}], "
-                        + "due to exception[{}]",
-                    len, uri, retryTime, exception);
+                LOG.debug("read len[{}] of [{}] failed, retry time[{}], " + "due to exception[{}]", len, uri, retryTime,
+                    exception);
                 delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime);
                 retryTime++;
                 if (System.currentTimeMillis() - startTime + delayMs
@@ -639,31 +560,24 @@ public synchronized int read(final ByteBuffer byteBuffer)
                     try {
                         sleepInLock(delayMs);
                     } catch (InterruptedException ie) {
-                        LOG.error(
-                            "read len[{}] of [{}] failed, retry time[{}], "
-                                + "due to exception[{}]",
-                            len, uri, retryTime, exception);
+                        LOG.error("read len[{}] of [{}] failed, retry time[{}], " + "due to exception[{}]", len, uri,
+                            retryTime, exception);
                         throw exception;
                     }
                 }
             }
-        } while (System.currentTimeMillis() - startRetryTime
-            <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY);
+        } while (System.currentTimeMillis() - startRetryTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY);

         if (exception != null) {
             endTime = System.currentTimeMillis();
             if (fs.getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(
-                        BasicMetricsConsumer.MetricRecord.BYTEBUF,
-                        BasicMetricsConsumer.MetricRecord.READ,
-                        false, endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                    BasicMetricsConsumer.MetricRecord.BYTEBUF, BasicMetricsConsumer.MetricRecord.READ, false,
+                    endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(fs, record);
             }
-            LOG.error(
-                "read len[{}] of [{}] failed, retry time[{}], "
-                    + "due to exception[{}]",
-                len, uri, retryTime, exception);
+            LOG.error("read len[{}] of [{}] failed, retry time[{}], " + "due to exception[{}]", len, uri, retryTime,
+                exception);
             throw exception;
         }

@@ -677,24 +591,19 @@ public synchronized int read(final ByteBuffer byteBuffer)
         endTime = System.currentTimeMillis();
         if (fs.getMetricSwitch()) {
-            BasicMetricsConsumer.MetricRecord record =
-                new BasicMetricsConsumer.MetricRecord(
-                    BasicMetricsConsumer.MetricRecord.BYTEBUF,
-                    BasicMetricsConsumer.MetricRecord.READ, true,
-                    endTime - startTime);
+            BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                BasicMetricsConsumer.MetricRecord.BYTEBUF, BasicMetricsConsumer.MetricRecord.READ, true,
+                endTime - startTime);
             OBSCommonUtils.setMetricsInfo(fs, record);
         }
-        LOG.debug(
-            "Read-ByteBuffer uri:{}, contentLength:{}, destLen:{}, readLen:{}, "
-                + "position:{}, thread:{}, timeUsedMilliSec:{}",
-            uri, contentLength, len, bytesRead, position, threadId,
+        LOG.debug("Read-ByteBuffer uri:{}, contentLength:{}, destLen:{}, readLen:{}, "
+            + "position:{}, thread:{}, timeUsedMilliSec:{}", uri, contentLength, len, bytesRead, position, threadId,
             endTime - startTime);
         return bytesRead;
     }

-    private int tryToReadFromInputStream(final InputStream in,
-        final byte[] buf,
-        final int off, final int len) throws IOException {
+    private int tryToReadFromInputStream(final InputStream in, final byte[] buf, final int off, final int len)
+        throws IOException {
         int bytesRead = 0;
         while (bytesRead < len) {
             int bytes = in.read(buf, off + bytesRead, len - bytesRead);
@@ -721,8 +630,7 @@ private int tryToReadFromInputStream(final InputStream in,
      * @throws IOException if there are other problems
      */
     @Override
-    public synchronized int read(@NotNull final byte[] buf, final int off,
-        final int len) throws IOException {
+    public synchronized int read(@NotNull final byte[] buf, final int off, final int len) throws IOException {
         fs.checkOpen();
         checkStreamOpen();
         long startTime = System.currentTimeMillis();
@@ -734,8 +642,7 @@ public synchronized int read(@NotNull final byte[] buf, final int off,
             return 0;
         }

-        boolean isTrue =
-            this.contentLength == 0 || nextReadPos >= contentLength;
+        boolean isTrue = this.contentLength == 0 || nextReadPos >= contentLength;
         if (isTrue) {
             return -1;
         }
@@ -755,9 +662,7 @@ public synchronized int read(@NotNull final byte[] buf, final int off,
         long startRetryTime = System.currentTimeMillis();
         do {
             try {
-                bytesRead = tryToReadFromInputStream(wrappedStream, buf,
-                    off,
-                    len);
+                bytesRead = tryToReadFromInputStream(wrappedStream, buf, off, len);
                 if (bytesRead == -1) {
                     return -1;
                 }
@@ -769,10 +674,8 @@ public synchronized int read(@NotNull final byte[] buf, final int off,
             } catch (IOException e) {
                 exception = e;
                 onReadFailure(e, len);
-                LOG.debug(
-                    "read offset[{}] len[{}] of [{}] failed, retry time[{}], "
-                        + "due to exception[{}]",
-                    off, len, uri, retryTime, exception);
+                LOG.debug("read offset[{}] len[{}] of [{}] failed, retry time[{}], " + "due to exception[{}]", off, len,
+                    uri, retryTime, exception);
                 delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime);
                 retryTime++;
                 if (System.currentTimeMillis() - startTime + delayMs
@@ -780,33 +683,26 @@ public synchronized int read(@NotNull final byte[] buf, final int off,
                     try {
                         sleepInLock(delayMs);
                     } catch (InterruptedException ie) {
-                        LOG.error(
-                            "read offset[{}] len[{}] of [{}] failed, "
-                                + "retry time[{}], due to exception[{}]",
+                        LOG.error("read offset[{}] len[{}] of [{}] failed, " + "retry time[{}], due to exception[{}]",
                             off, len, uri, retryTime, exception);
                         throw exception;
                     }
                 }
             }
-        } while (System.currentTimeMillis() - startRetryTime
-            <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY);
+        } while (System.currentTimeMillis() - startRetryTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY);

         long costTime;
         if (exception != null) {
             endTime = System.currentTimeMillis();
             if (fs.getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(
-                        BasicMetricsConsumer.MetricRecord.SEQ,
-                        BasicMetricsConsumer.MetricRecord.READ, false,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                    BasicMetricsConsumer.MetricRecord.SEQ, BasicMetricsConsumer.MetricRecord.READ, false,
+                    endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(fs, record);
             }
-            LOG.error(
-                "read offset[{}] len[{}] of [{}] failed, retry time[{}], "
-                    + "due to exception[{}]",
-                off, len, uri, retryTime, exception);
+            LOG.error("read offset[{}] len[{}] of [{}] failed, retry time[{}], " + "due to exception[{}]", off, len,
+                uri, retryTime, exception);
             throw exception;
         }

@@ -821,20 +717,16 @@ public synchronized int read(@NotNull final byte[] buf, final int off,
         readMetric(costTime);
         long position = bytesRead >= 0 ? nextReadPos - 1 : nextReadPos;
-        LOG.debug(
-            "Read-3args uri:{}, contentLength:{}, destLen:{}, readLen:{}, "
-                + "position:{}, thread:{}, timeUsedMilliSec:{}",
-            uri, contentLength, len, bytesRead, position, threadId,
+        LOG.debug("Read-3args uri:{}, contentLength:{}, destLen:{}, readLen:{}, "
+            + "position:{}, thread:{}, timeUsedMilliSec:{}", uri, contentLength, len, bytesRead, position, threadId,
             endTime - startTime);
         return bytesRead;
     }

     private void readMetric(long costTime) {
         if (fs.getMetricSwitch()) {
-            BasicMetricsConsumer.MetricRecord record =
-                new BasicMetricsConsumer.MetricRecord(
-                    BasicMetricsConsumer.MetricRecord.SEQ,
-                    BasicMetricsConsumer.MetricRecord.READ, true, costTime);
+            BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                BasicMetricsConsumer.MetricRecord.SEQ, BasicMetricsConsumer.MetricRecord.READ, true, costTime);
             OBSCommonUtils.setMetricsInfo(fs, record);
         }
     }
@@ -847,8 +739,7 @@ private void readMetric(long costTime) {
      */
     private void checkStreamOpen() throws IOException {
         if (closed) {
-            throw new IOException(
-                uri + ": " + FSExceptionMessages.STREAM_IS_CLOSED);
+            throw new IOException(uri + ": " + FSExceptionMessages.STREAM_IS_CLOSED);
         }
     }

@@ -874,11 +765,9 @@ public synchronized void close() throws IOException {
             long endTime = System.currentTimeMillis();
             if (fs.getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(
-                        BasicMetricsConsumer.MetricRecord.INPUT,
-                        BasicMetricsConsumer.MetricRecord.CLOSE, true,
-                        endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                    BasicMetricsConsumer.MetricRecord.INPUT, BasicMetricsConsumer.MetricRecord.CLOSE, true,
+                    endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(fs, record);
             }
         }
@@ -894,9 +783,7 @@ public synchronized void close() throws IOException {
      * @param length length of the stream
      * @throws IOException on any failure to close stream
      */
-    private synchronized void closeStream(final String reason,
-        final long length)
-        throws IOException {
+    private synchronized void closeStream(final String reason, final long length) throws IOException {
         if (wrappedStream != null) {
             try {
                 wrappedStream.close();
@@ -906,16 +793,8 @@ private synchronized void closeStream(final String reason,
                 throw e;
             }

-            LOG.debug(
-                "Stream {} : {}; streamPos={}, nextReadPos={},"
-                    + " request range {}-{} length={}",
-                uri,
-                reason,
-                streamCurrentPos,
-                nextReadPos,
-                contentRangeStart,
-                contentRangeFinish,
-                length);
+            LOG.debug("Stream {} : {}; streamPos={}, nextReadPos={}," + " request range {}-{} length={}", uri, reason,
+                streamCurrentPos, nextReadPos, contentRangeStart, contentRangeFinish, length);
             wrappedStream = null;
         }
     }
@@ -970,18 +849,10 @@ public boolean markSupported() {
     @InterfaceStability.Unstable
     public String toString() {
         synchronized (this) {
-            return "OBSInputStream{" + uri
-                + " wrappedStream=" + (wrappedStream != null
-                ? "open"
-                : "closed")
-                + " streamCurrentPos=" + streamCurrentPos
-                + " nextReadPos=" + nextReadPos
-                + " contentLength=" + contentLength
-                + " contentRangeStart=" + contentRangeStart
-                + " contentRangeFinish=" + contentRangeFinish
-                + " remainingInCurrentRequest="
-                + remainingInCurrentRequest()
-                + '}';
+            return "OBSInputStream{" + uri + " wrappedStream=" + (wrappedStream != null ? "open" : "closed")
+                + " streamCurrentPos=" + streamCurrentPos + " nextReadPos=" + nextReadPos + " contentLength="
+                + contentLength + " contentRangeStart=" + contentRangeStart + " contentRangeFinish="
+                + contentRangeFinish + " remainingInCurrentRequest=" + remainingInCurrentRequest() + '}';
         }
     }

@@ -996,9 +867,7 @@ public String toString() {
      * synchronized for the duration of the sequence. {@inheritDoc}
      */
     @Override
-    public void readFully(final long position, final byte[] buffer,
-        final int offset,
-        final int length)
+    public void readFully(final long position, final byte[] buffer, final int offset, final int length)
         throws IOException {
         fs.checkOpen();
         checkStreamOpen();
@@ -1015,11 +884,9 @@ public void readFully(final long position, final byte[] buffer,
         try {
             seek(position);
             while (nread < length) {
-                int nbytes = read(buffer, offset + nread,
-                    length - nread);
+                int nbytes = read(buffer, offset + nread, length - nread);
                 if (nbytes < 0) {
-                    throw new EOFException(
-                        FSExceptionMessages.EOF_IN_READ_FULLY);
+                    throw new EOFException(FSExceptionMessages.EOF_IN_READ_FULLY);
                 }
                 nread += nbytes;
             }
@@ -1029,16 +896,12 @@ public void readFully(final long position, final byte[] buffer,
         }
         long endTime = System.currentTimeMillis();
         if (fs.getMetricSwitch()) {
-            BasicMetricsConsumer.MetricRecord record =
-                new BasicMetricsConsumer.MetricRecord(
-                    null, BasicMetricsConsumer.MetricRecord.READFULLY, true,
-                    endTime - startTime);
+            BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(null,
+                BasicMetricsConsumer.MetricRecord.READFULLY, true, endTime - startTime);
             OBSCommonUtils.setMetricsInfo(fs, record);
         }
-        LOG.debug(
-            "ReadFully uri:{}, contentLength:{}, destLen:{}, readLen:{}, "
-                + "position:{}, thread:{}, timeUsedMilliSec:{}",
-            uri, contentLength, length, nread, position, threadId,
+        LOG.debug("ReadFully uri:{}, contentLength:{}, destLen:{}, readLen:{}, "
+            + "position:{}, thread:{}, timeUsedMilliSec:{}", uri, contentLength, length, nread, position, threadId,
             endTime - startTime);
     }

@@ -1053,10 +916,7 @@ public void readFully(final long position, final byte[] buffer,
      * @throws IOException on any failure to read
      */
     @Override
-    public int read(final long position, final byte[] buffer,
-        final int offset,
-        final int length)
-        throws IOException {
+    public int read(final long position, final byte[] buffer, final int offset, final int length) throws IOException {
         fs.checkOpen();
         checkStreamOpen();
         int len = length;
@@ -1077,11 +937,9 @@ public int read(final long position, final byte[] buffer,
             readSize = super.read(position, buffer, offset, len);
             endTime = System.currentTimeMillis();
             if (fs.getMetricSwitch()) {
-                BasicMetricsConsumer.MetricRecord record =
-                    new BasicMetricsConsumer.MetricRecord(
-                        BasicMetricsConsumer.MetricRecord.RANDOM,
-                        BasicMetricsConsumer.MetricRecord.READ,
-                        true, endTime - startTime);
+                BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                    BasicMetricsConsumer.MetricRecord.RANDOM, BasicMetricsConsumer.MetricRecord.READ, true,
+                    endTime - startTime);
                 OBSCommonUtils.setMetricsInfo(fs, record);
             }
             return readSize;
@@ -1089,19 +947,16 @@ public int read(final long position, final byte[] buffer,
         readSize = randomReadWithNewInputStream(position, buffer, offset, len);
         endTime = System.currentTimeMillis();
         if (fs.getMetricSwitch()) {
-            BasicMetricsConsumer.MetricRecord record =
-                new BasicMetricsConsumer.MetricRecord(
-                    BasicMetricsConsumer.MetricRecord.RANDOM,
-                    BasicMetricsConsumer.MetricRecord.READ,
-                    true, endTime - startTime);
+            BasicMetricsConsumer.MetricRecord record = new BasicMetricsConsumer.MetricRecord(
+                BasicMetricsConsumer.MetricRecord.RANDOM, BasicMetricsConsumer.MetricRecord.READ, true,
+                endTime - startTime);
             OBSCommonUtils.setMetricsInfo(fs, record);
         }
         return readSize;
     }

-    private int randomReadWithNewInputStream(final long position,
-        final byte[] buffer, final int offset, final int length)
-        throws IOException {
+    private int randomReadWithNewInputStream(final long position, final byte[] buffer, final int offset,
+        final int length) throws IOException {
         long startTime = System.currentTimeMillis();
         long threadId = Thread.currentThread().getId();
         int bytesRead = 0;
@@ -1121,19 +976,12 @@ private int randomReadWithNewInputStream(final long position,
             exception = null;
             try {
                 inputStream = client.getObject(request).getObjectContent();
-                if (inputStream == null) {
-                    break;
-                }
             } catch (ObsException e) {
-                exception = translateException(
-                    "Read at position " + position, uri, e);
+                exception = OBSCommonUtils.translateException("Read at position " + position, uri, e);
                 LOG.debug(
-                    "read position[{}] destLen[{}] offset[{}] readLen[{}] "
-                        + "of [{}] failed, retry time[{}], due to "
-                        + "exception[{}]",
-                    position, length, offset, bytesRead, uri, retryTime,
-                    exception);
+                    "read position[{}] destLen[{}] offset[{}] readLen[{}] " + "of [{}] failed, retry time[{}], due to "
+                        + "exception[{}]", position, length, offset, bytesRead, uri, retryTime, exception);

                 if (!(exception instanceof OBSIOException)) {
                     throw exception;
@@ -1142,8 +990,7 @@ private int randomReadWithNewInputStream(final long position,

             if (exception == null) {
                 try {
-                    bytesRead = tryToReadFromInputStream(inputStream, buffer,
-                        offset, length);
+                    bytesRead = tryToReadFromInputStream(inputStream, buffer, offset, length);
                     if (bytesRead == -1) {
                         return -1;
                     }
@@ -1156,67 +1003,53 @@ private int randomReadWithNewInputStream(final long position,
                 } catch (IOException e) {
                     exception = e;
-                    LOG.debug(
-                        "read position[{}] destLen[{}] offset[{}] readLen[{}] "
-                            + "of [{}] failed, retry time[{}], due to "
-                            + "exception[{}]",
-                        position, length, offset, bytesRead, uri, retryTime,
-                        exception);
+                    LOG.debug("read position[{}] destLen[{}] offset[{}] readLen[{}] "
+                        + "of [{}] failed, retry time[{}], due to " + "exception[{}]", position, length, offset,
+                        bytesRead, uri, retryTime, exception);
                 } finally {
-                    inputStream.close();
+                    if (inputStream != null) {
+                        inputStream.close();
+                    }
                 }
             }
             delayMs = OBSCommonUtils.getSleepTimeInMs(retryTime);
             retryTime++;
-            if (System.currentTimeMillis() - startTime + delayMs
-                < OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
+            if (System.currentTimeMillis() - startTime + delayMs < OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY) {
                 try {
                     Thread.sleep(delayMs);
                 } catch (InterruptedException ie) {
                     LOG.error(
-                        "read position[{}] destLen[{}] offset[{}] "
-                            + "readLen[{}] of [{}] failed, retry time[{}], "
-                            + "due to exception[{}]",
-                        position, length, offset, bytesRead, uri,
-                        retryTime, exception);
+                        "read position[{}] destLen[{}] offset[{}] " + "readLen[{}] of [{}] failed, retry time[{}], "
+                            + "due to exception[{}]", position, length, offset, bytesRead, uri, retryTime, exception);
                     throw exception;
                 }
             }
-        } while (System.currentTimeMillis() - startRetryTime
-            <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY);
+        } while (System.currentTimeMillis() - startRetryTime <= OBSCommonUtils.MAX_TIME_IN_MILLISECONDS_TO_RETRY);

         if (inputStream == null || exception != null) {
             IOException e = new IOException(
-                "read failed of " + uri + ", inputStream is "
-                    + (inputStream == null ? "null" : "not null"),
-                exception);
+                "read failed of " + uri + ", inputStream is " + (inputStream == null ? "null" : "not null"), exception);
             LOG.error(
-                "read position[{}] destLen[{}] offset[{}] len[{}] failed, "
-                    + "retry time[{}], due to exception[{}]",
-                position, length, offset, bytesRead, retryTime,
-                exception);
+                "read position[{}] destLen[{}] offset[{}] len[{}] failed, " + "retry time[{}], due to exception[{}]",
+                position, length, offset, bytesRead, retryTime, exception);
             throw e;
         }
         long endTime = System.currentTimeMillis();
-        LOG.debug(
-            "Read-4args uri:{}, contentLength:{}, destLen:{}, readLen:{}, "
-                + "position:{}, thread:{}, timeUsedMilliSec:{}",
-            uri, contentLength, length, bytesRead, position, threadId,
+        LOG.debug("Read-4args uri:{}, contentLength:{}, destLen:{}, readLen:{}, "
+            + "position:{}, thread:{}, timeUsedMilliSec:{}", uri, contentLength, length, bytesRead, position, threadId,
             endTime - startTime);
         return bytesRead;
     }

     @Override
-    public synchronized void setReadahead(final Long newReadaheadRange)
-        throws IOException {
+    public synchronized void setReadahead(final Long newReadaheadRange) throws IOException {
         fs.checkOpen();
         checkStreamOpen();
         if (newReadaheadRange == null) {
             this.readAheadRange = OBSConstants.DEFAULT_READAHEAD_RANGE;
         } else {
-            Preconditions.checkArgument(newReadaheadRange >= 0,
-                "Negative readahead value");
+            Preconditions.checkArgument(newReadaheadRange >= 0, "Negative readahead value");
             this.readAheadRange = newReadaheadRange;
         }
     }
diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/ReadAheadBuffer.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/ReadAheadBuffer.java
new file mode 100644
index 0000000..10271c5
--- /dev/null
+++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/ReadAheadBuffer.java
@@ -0,0 +1,71 @@
+package org.apache.hadoop.fs.obs.input;
+
+import java.util.concurrent.locks.Condition;
+import java.util.concurrent.locks.ReentrantLock;
+
+public class ReadAheadBuffer {
+    public enum STATUS {
+        INIT,
+        SUCCESS,
+        ERROR
+    }
+
+    private final ReentrantLock lock = new ReentrantLock();
+
+    private Condition condition = lock.newCondition();
+
+    private final byte[] buffer;
+
+    private ReadAheadBuffer.STATUS status;
+
+    private long start;
+
+    private long end;
+
+    public ReadAheadBuffer(long bufferStart, long bufferEnd) {
+        this.buffer = new byte[(int) (bufferEnd - bufferStart) + 1];
+
+        this.status = ReadAheadBuffer.STATUS.INIT;
+        this.start = bufferStart;
+        this.end = bufferEnd;
+    }
+
+    public void lock() {
+        lock.lock();
+    }
+
+    public void unlock() {
+        lock.unlock();
+    }
+
+    public void await(ReadAheadBuffer.STATUS waitStatus) throws InterruptedException {
+        while (this.status == waitStatus) {
+            condition.await();
+        }
+    }
+
+    public void signalAll() {
+        condition.signalAll();
+    }
+
+    public byte[] getBuffer() {
+        return buffer;
+    }
+
+    public ReadAheadBuffer.STATUS getStatus() {
+        return status;
+    }
+
+    public void setStatus(ReadAheadBuffer.STATUS status) {
+        this.status = status;
+    }
+
+    public long getByteStart() {
+        return start;
+    }
+
+    public long getByteEnd() {
+        return end;
+    }
+}
+
diff --git a/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/ReadAheadTask.java b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/ReadAheadTask.java
new file mode 100644
index 0000000..46c0106
--- /dev/null
+++ b/hadoop-huaweicloud/src/main/java/org/apache/hadoop/fs/obs/input/ReadAheadTask.java
@@ -0,0 +1,103 @@
+package org.apache.hadoop.fs.obs.input;
+
+import com.obs.services.ObsClient;
+import com.obs.services.model.GetObjectRequest;
+
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.io.retry.RetryPolicies;
+import org.apache.hadoop.io.retry.RetryPolicy;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Used by {@link OBSExtendInputStream} as a task that is submitted
+ * to the thread pool.
+ * Each ReadAheadTask reads one part of the file so that
+ * we can accelerate the sequential read.
+ */
+public class ReadAheadTask implements Runnable {
+    public final Logger log = LoggerFactory.getLogger(ReadAheadTask.class);
+
+    private String bucketName;
+
+    private String key;
+
+    private ObsClient client;
+
+    private ReadAheadBuffer buffer;
+
+    private static final int MAX_RETRIES = 3;
+
+    private RetryPolicy retryPolicy;
+
+    public ReadAheadTask(String bucketName, String key, ObsClient client, ReadAheadBuffer buffer) {
+        this.bucketName = bucketName;
+        this.key = key;
+        this.client = client;
+        this.buffer = buffer;
+        RetryPolicy defaultPolicy = RetryPolicies.retryUpToMaximumCountWithFixedSleep(MAX_RETRIES, 3, TimeUnit.SECONDS);
+        Map<Class<? extends Exception>, RetryPolicy> policies = new HashMap<>();
+        policies.put(IOException.class, defaultPolicy);
+        policies.put(IndexOutOfBoundsException.class, RetryPolicies.TRY_ONCE_THEN_FAIL);
+        policies.put(NullPointerException.class, RetryPolicies.TRY_ONCE_THEN_FAIL);
+
+        this.retryPolicy = RetryPolicies.retryByException(defaultPolicy, policies);
+    }
+
+    private boolean shouldRetry(Exception e, int retries) {
+        boolean shouldRetry = true;
+        try {
+            RetryPolicy.RetryAction retry = retryPolicy.shouldRetry(e, retries, 0, true);
+            if (retry.action == RetryPolicy.RetryAction.RetryDecision.RETRY) {
+                Thread.sleep(retry.delayMillis);
+            } else {
+                // should not retry
+                shouldRetry = false;
+            }
+        } catch (Exception ex) {
+            // FAIL
+            log.warn("Exception thrown when call shouldRetry, exception " + ex);
+            shouldRetry = false;
+        }
+        return shouldRetry;
+    }
+
+    @Override
+    public void run() {
+        int retries = 0;
+        buffer.lock();
+        try {
+            GetObjectRequest request = new GetObjectRequest(bucketName, key);
+            request.setRangeStart(buffer.getByteStart());
+            request.setRangeEnd(buffer.getByteEnd());
+            while (true) {
+                try (InputStream in = client.getObject(request).getObjectContent()) {
+                    IOUtils.readFully(in, buffer.getBuffer(), 0, buffer.getBuffer().length);
+                    buffer.setStatus(ReadAheadBuffer.STATUS.SUCCESS);
+                    break;
+                } catch (Exception e) {
+                    log.warn("Exception thrown when retrieve key: " + this.key + ", exception: " + e);
+                    retries++;
+                    if (!shouldRetry(e, retries)) {
+                        break;
+                    }
+                }
+            }
+
+            if (buffer.getStatus() != ReadAheadBuffer.STATUS.SUCCESS) {
+                buffer.setStatus(ReadAheadBuffer.STATUS.ERROR);
+            }
+
+            // notify main thread which waits for this buffer
+            buffer.signalAll();
+        } finally {
+            buffer.unlock();
+        }
+    }
+}
\ No newline at end of file
diff --git a/release/hadoop-huaweicloud-2.8.3-hw-43.jar b/release/hadoop-huaweicloud-2.8.3-hw-43.jar
new file mode 100644
index 0000000..1564f55
Binary files /dev/null and b/release/hadoop-huaweicloud-2.8.3-hw-43.jar differ
diff --git a/release/hadoop-huaweicloud-2.8.3-hw-45.jar b/release/hadoop-huaweicloud-2.8.3-hw-45.jar
new file mode 100644
index 0000000..e21c211
Binary files /dev/null and b/release/hadoop-huaweicloud-2.8.3-hw-45.jar differ
diff --git a/release/hadoop-huaweicloud-3.1.1-hw-43.jar b/release/hadoop-huaweicloud-3.1.1-hw-43.jar
new file mode 100644
index 0000000..293e08c
Binary files /dev/null and b/release/hadoop-huaweicloud-3.1.1-hw-43.jar differ
diff --git a/release/hadoop-huaweicloud-3.1.1-hw-45.jar b/release/hadoop-huaweicloud-3.1.1-hw-45.jar
new file mode 100644
index 0000000..38a781b
Binary files /dev/null and b/release/hadoop-huaweicloud-3.1.1-hw-45.jar differ
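The new read-ahead classes in this diff coordinate a background `ReadAheadTask` with the reading thread through a `ReentrantLock`/`Condition` pair on `ReadAheadBuffer`: the task fills the buffer under the lock, flips its status from INIT, and calls `signalAll()`, while the reader blocks in `await(INIT)` until the status changes. The following is a minimal, self-contained sketch of that handshake only; the class and method names here are illustrative stand-ins, not the connector's API, and the ranged OBS GET is replaced by an in-memory fill:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ReadAheadDemo {
    enum Status { INIT, SUCCESS, ERROR }

    // Simplified stand-in for ReadAheadBuffer: a byte range plus a
    // lock/condition pair guarding the status field.
    static class Buffer {
        final ReentrantLock lock = new ReentrantLock();
        final Condition cond = lock.newCondition();
        final byte[] data;
        Status status = Status.INIT;

        Buffer(int size) { data = new byte[size]; }

        // Block (releasing the lock) while the buffer is still in waitStatus.
        void await(Status waitStatus) throws InterruptedException {
            while (status == waitStatus) {
                cond.await();
            }
        }
    }

    public static byte[] fetch(Buffer buf) throws InterruptedException {
        // Background "read-ahead task": fill the buffer under the lock,
        // mark it SUCCESS, then wake every thread blocked in await().
        Thread task = new Thread(() -> {
            buf.lock.lock();
            try {
                for (int i = 0; i < buf.data.length; i++) {
                    buf.data[i] = (byte) i;  // stands in for the ranged GET
                }
                buf.status = Status.SUCCESS;
                buf.cond.signalAll();
            } finally {
                buf.lock.unlock();
            }
        });
        task.start();

        // Reader side: block until the task has left the INIT state.
        buf.lock.lock();
        try {
            buf.await(Status.INIT);
        } finally {
            buf.lock.unlock();
        }
        task.join();
        return buf.status == Status.SUCCESS ? buf.data : null;
    }

    public static void main(String[] args) throws InterruptedException {
        byte[] data = fetch(new Buffer(4));
        System.out.println(data != null && data[3] == 3);  // prints "true"
    }
}
```

The actual `OBSExtendInputStream` submits one such task per part to a thread pool, which is what lets several ranged GETs overlap during a sequential scan.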