HDFS-17798. Fixed the issue where bad replicas in the mini cluster could not be automatically replicated #7749

Open
wants to merge 1 commit into branch-3.3

Conversation

gp1314 (Contributor) commented Jun 19, 2025

Description of PR

In a 3-datanode cluster with a 3-replica block, if one replica becomes corrupted on a node (and the corruption did not happen during the write), the following occurs:
  • The corrupted replica is never removed from the damaged node.
  • Because a healthy replica is missing, replication reconstruction tasks are continuously scheduled for the block.
  • During reconstruction, every node that already hosts a replica of the block is excluded as a target, so all 3 datanodes are excluded.
  • No suitable target node can be chosen, and the block is stuck in a vicious cycle (see the sketch after this list).
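
To make the cycle concrete, here is a minimal, self-contained sketch in plain Java (hypothetical names such as ExclusionCycleSketch and chooseTargets, not the actual BlockManager or BlockPlacementPolicy code): once every datanode in a 3-node cluster counts as a containing node, target selection always returns an empty list and the reconstruction work is retried indefinitely.

  import java.util.ArrayList;
  import java.util.Arrays;
  import java.util.HashSet;
  import java.util.List;
  import java.util.Set;

  // Hypothetical, simplified model of the target-selection dead end; it is not
  // the real HDFS placement code.
  public class ExclusionCycleSketch {

    // Every datanode that already holds a replica (healthy or corrupted) is excluded.
    static List<String> chooseTargets(List<String> liveDatanodes, Set<String> excluded) {
      List<String> targets = new ArrayList<>();
      for (String dn : liveDatanodes) {
        if (!excluded.contains(dn)) {
          targets.add(dn);
        }
      }
      return targets;
    }

    public static void main(String[] args) {
      List<String> datanodes = Arrays.asList("dn0", "dn1", "dn2");
      // dn0 holds the corrupted replica, dn1 and dn2 hold healthy ones; all three
      // are "containing nodes" and therefore excluded from target selection.
      Set<String> containingNodes = new HashSet<>(datanodes);
      // Prints []: no target exists, so the reconstruction task is re-queued forever.
      System.out.println(chooseTargets(datanodes, containingNodes));
    }
  }

This mirrors what the unit test below asserts: getContainingNodes() returns all 3 datanodes, so no reconstruction target is available.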

Reproduction

The issue can be reproduced manually on a 3-datanode cluster as follows:
  • Find a healthy block with three replicas and corrupt (e.g. truncate) the replica file on one of its datanodes.
  • After the next datanode directory scan cycle, the replica is reported as corrupted, but the block is never rebuilt from the remaining healthy replicas.

Alternatively, the unit test testMiniClusterCannotReconstructionWhileReplicaAnomaly added to TestBlockManager reproduces the problem:

  @Test(timeout = 60000)
  public void testMiniClusterCannotReconstructionWhileReplicaAnomaly() 
      throws IOException, InterruptedException, TimeoutException {
    Configuration conf = new HdfsConfiguration();
    conf.setInt("dfs.datanode.directoryscan.interval", DN_DIRECTORYSCAN_INTERVAL);
    conf.setInt("dfs.namenode.replication.interval", 1);
    conf.setInt("dfs.heartbeat.interval", 1);
    String src = "/test-reconstruction";
    Path file = new Path(src);
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    try {
      cluster.waitActive();
      FSNamesystem fsn = cluster.getNamesystem();
      BlockManager bm = fsn.getBlockManager();
      
      FSDataOutputStream out = null;
      FileSystem fs = cluster.getFileSystem();
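      // Write a small single-block file so it gets three replicas, one per datanode.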
      try {
        out = fs.create(file);
        for (int i = 0; i < 1024 * 1; i++) {
          out.write(i);
        }
        out.hflush();
      } finally {
        IOUtils.closeStream(out);
      }
      
      FSDataInputStream in = null;
      ExtendedBlock oldBlock = null;
      try {
        in = fs.open(file);
        oldBlock = DFSTestUtil.getAllBlocks(in).get(0).getBlock();
      } finally {
        IOUtils.closeStream(in);
      }
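      // Simulate corruption that happened outside the write path: truncate both the
      // block file and its meta file of the replica on the first datanode.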
      DataNode dn = cluster.getDataNodes().get(0);
      String blockPath = dn.getFSDataset().getBlockLocalPathInfo(oldBlock).getBlockPath();
      String metaBlockPath = dn.getFSDataset().getBlockLocalPathInfo(oldBlock).getMetaPath();
      Files.write(Paths.get(blockPath), Collections.emptyList());
      Files.write(Paths.get(metaBlockPath), Collections.emptyList());
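      // Restart the datanode so that, after its next directory scan, the corrupted
      // replica is detected and reported to the NameNode.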
      cluster.restartDataNode(0, true);
      cluster.waitDatanodeConnectedToActive(dn, 60000);
      while (!dn.isDatanodeFullyStarted()) {
        Thread.sleep(1000);
      }
      Thread.sleep(DN_DIRECTORYSCAN_INTERVAL * 1000);
      cluster.triggerBlockReports();
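      // The NameNode should now see the block as under-replicated and in need of
      // reconstruction.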
      BlockInfo bi = bm.getStoredBlock(oldBlock.getLocalBlock());
      assertTrue(bm.isNeededReconstruction(bi,
          bm.countNodes(bi, cluster.getNamesystem().isInStartupSafeMode())));
      BlockReconstructionWork reconstructionWork = null;
      fsn.readLock();
      try {
        reconstructionWork = bm.scheduleReconstruction(bi, 3);
      } finally {
        fsn.readUnlock();
      }
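      // All three datanodes are containing nodes, so every node in the cluster is
      // excluded as a reconstruction target and the corrupted replica can never be
      // replaced.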
      assertNotNull(reconstructionWork);
      assertEquals(3, reconstructionWork.getContainingNodes().size());
     
    } finally {
      if (cluster != null) {
        cluster.shutdown();
      }
    }
  }

How was this patch tested?

Unit test

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

gp1314 changed the base branch from trunk to branch-3.3 on June 19, 2025 03:59
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 4m 1s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 1 new or modified test files.
_ branch-3.3 Compile Tests _
+1 💚 mvninstall 33m 18s branch-3.3 passed
+1 💚 compile 0m 49s branch-3.3 passed
+1 💚 checkstyle 0m 32s branch-3.3 passed
+1 💚 mvnsite 0m 53s branch-3.3 passed
+1 💚 javadoc 1m 6s branch-3.3 passed
+1 💚 spotbugs 1m 53s branch-3.3 passed
+1 💚 shadedclient 21m 32s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+1 💚 mvninstall 0m 48s the patch passed
+1 💚 compile 0m 40s the patch passed
+1 💚 javac 0m 40s the patch passed
-1 ❌ blanks 0m 0s /blanks-eol.txt The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
-0 ⚠️ checkstyle 0m 24s /results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 118 unchanged - 0 fixed = 120 total (was 118)
+1 💚 mvnsite 0m 45s the patch passed
+1 💚 javadoc 0m 54s the patch passed
+1 💚 spotbugs 1m 48s the patch passed
+1 💚 shadedclient 20m 54s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 167m 47s /patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt hadoop-hdfs in the patch passed.
+1 💚 asflicense 0m 30s The patch does not generate ASF License warnings.
257m 31s
Reason Tests
Failed junit tests hadoop.hdfs.protocol.TestBlockListAsLongs
hadoop.hdfs.TestFileCreation
hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized
hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage
hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl
hadoop.hdfs.server.namenode.TestProcessCorruptBlocks
hadoop.hdfs.server.namenode.TestAddStripedBlocks
hadoop.hdfs.server.datanode.TestLargeBlockReport
Subsystem Report/Notes
Docker ClientAPI=1.50 ServerAPI=1.50 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7749/1/artifact/out/Dockerfile
GITHUB PR #7749
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux ff2a99966eca 5.15.0-136-generic #147-Ubuntu SMP Sat Mar 15 15:53:30 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision branch-3.3 / b7b6c43
Default Java Private Build-1.8.0_362-8u372-gaus1-0ubuntu118.04-b09
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7749/1/testReport/
Max. process+thread count 4374 (vs. ulimit of 5500)
modules C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7749/1/console
versions git=2.17.1 maven=3.6.0 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.
