
Discrepancies in the Output of the k_hop_subgraph Function: Unexpected Additional Edges #10040

Open

1234238 opened this issue Feb 17, 2025 · 0 comments

1234238 commented Feb 17, 2025

🐛 Describe the bug

I used the k_hop_subgraph function to find the one-hop neighbors of a set of target nodes. The target nodes are defined as torch.arange(gia_test.y.shape[0], gia_test.x.shape[0]): gia_test.x holds the original dataset plus some newly added nodes, which introduce additional connections but were not assigned any labels. My goal is to identify the original nodes to which these new nodes are connected.

To achieve this, I executed the code below. However, I was surprised to find that the resulting edge_index contained some extra edges between nodes that I did not expect. I am unsure why this occurs. Could this behavior be considered normal?

import torch
from torch_geometric.utils import k_hop_subgraph

print(gia_test)

# Target nodes: the newly added nodes (indices y.shape[0] .. x.shape[0] - 1).
target_nodes = torch.arange(gia_test.y.shape[0], gia_test.x.shape[0])
print(target_nodes)

# One-hop subgraph around the target nodes.
subset, gia_tar_edge_index, _, _ = k_hop_subgraph(
    target_nodes, 1, gia_test.edge_index, relabel_nodes=False)

# Deduplicate undirected edges, then keep only edges whose larger endpoint is an original node.
sorted_edges = torch.sort(gia_tar_edge_index, dim=0)[0]
directed_edge_index = torch.unique(sorted_edges, dim=1)
mask = directed_edge_index[1] < gia_test.y.shape[0]
print(directed_edge_index[:, mask])
Data(x=[2880, 302], edge_index=[2, 11496], y=[2680, 1], train_mask=[2680], val_mask=[2680], test_mask=[2680], target_idx=[804])
tensor([2680, 2681, 2682, 2683, 2684, 2685, 2686, 2687, 2688, 2689, 2690, 2691,
        2692, 2693, 2694, 2695, 2696, 2697, 2698, 2699, 2700, 2701, 2702, 2703,
        2704, 2705, 2706, 2707, 2708, 2709, 2710, 2711, 2712, 2713, 2714, 2715,
        2716, 2717, 2718, 2719, 2720, 2721, 2722, 2723, 2724, 2725, 2726, 2727,
        2728, 2729, 2730, 2731, 2732, 2733, 2734, 2735, 2736, 2737, 2738, 2739,
        2740, 2741, 2742, 2743, 2744, 2745, 2746, 2747, 2748, 2749, 2750, 2751,
        2752, 2753, 2754, 2755, 2756, 2757, 2758, 2759, 2760, 2761, 2762, 2763,
        2764, 2765, 2766, 2767, 2768, 2769, 2770, 2771, 2772, 2773, 2774, 2775,
        2776, 2777, 2778, 2779, 2780, 2781, 2782, 2783, 2784, 2785, 2786, 2787,
        2788, 2789, 2790, 2791, 2792, 2793, 2794, 2795, 2796, 2797, 2798, 2799,
        2800, 2801, 2802, 2803, 2804, 2805, 2806, 2807, 2808, 2809, 2810, 2811,
        2812, 2813, 2814, 2815, 2816, 2817, 2818, 2819, 2820, 2821, 2822, 2823,
        2824, 2825, 2826, 2827, 2828, 2829, 2830, 2831, 2832, 2833, 2834, 2835,
        2836, 2837, 2838, 2839, 2840, 2841, 2842, 2843, 2844, 2845, 2846, 2847,
        2848, 2849, 2850, 2851, 2852, 2853, 2854, 2855, 2856, 2857, 2858, 2859,
        2860, 2861, 2862, 2863, 2864, 2865, 2866, 2867, 2868, 2869, 2870, 2871,
        2872, 2873, 2874, 2875, 2876, 2877, 2878, 2879])
tensor([[  31,   64,   70,  258,  379,  409,  461,  569,  614,  719,  778,  915,
          936,  964,  971, 1015, 1044, 1145, 1274, 1349, 1408, 1448, 1561, 1667,
         1671, 1671, 1693, 1805, 2608],
        [1582, 1203, 2164, 2511, 2543, 1174, 1045, 2407, 2420, 1031, 2380, 2048,
         2153, 2633, 1485, 2356, 2311, 1237, 2586, 1601, 2240, 2198, 2671, 2550,
         2462, 2489, 2559, 1934, 2609]])
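
To make what I mean by "extra edges" easier to reproduce, here is a minimal sketch on a small hand-made graph (the node indices and edges below are made up for illustration; only k_hop_subgraph itself is the real PyG function). It should show the same effect: the returned edge_index also contains edges between the seed node's neighbors, not only edges incident to the seed node.

import torch
from torch_geometric.utils import k_hop_subgraph

# Toy undirected graph on 4 nodes, stored in both directions:
# edges 0-1, 0-2, 1-2, 2-3.
edge_index = torch.tensor([[0, 1, 0, 2, 1, 2, 2, 3],
                           [1, 0, 2, 0, 2, 1, 3, 2]])

# One-hop subgraph around seed node 0.
subset, sub_edge_index, _, _ = k_hop_subgraph(
    torch.tensor([0]), 1, edge_index, relabel_nodes=False)

print(subset)          # nodes within one hop of node 0: 0, 1, 2
print(sub_edge_index)  # also contains the edge between 1 and 2, which does not touch node 0

If I read the documentation correctly, the returned edge_index seems to be the subgraph induced on all nodes within num_hops of the seeds, so edges between two neighbors are included as well. Is that the intended behavior, and is it what I am seeing above?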

Versions

Collecting environment information...
PyTorch version: 2.0.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.3) 9.4.0
Clang version: Could not collect
CMake version: version 3.25.0
Libc version: glibc-2.31

Python version: 3.8.8 (default, Apr 13 2021, 19:58:26)  [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.3.58
CUDA_MODULE_LOADING set to: LAZY

Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] numpy==1.24.4
[pip3] torch==2.0.0+cu118
[pip3] torch-cluster==1.6.3+pt20cu118
[pip3] torch-geometric==2.6.1
[pip3] torch-scatter==2.1.2+pt20cu118
[pip3] torch-sparse==0.6.18+pt20cu118
[pip3] torch-spline-conv==1.2.2+pt20cu118
[pip3] torchaudio==2.0.1+cu118
[pip3] torchvision==0.15.1+cu118
[pip3] triton==2.0.0
[conda] numpy                     1.24.4                   pypi_0    pypi
[conda] torch                     2.0.0+cu118              pypi_0    pypi
[conda] torch-cluster             1.6.3+pt20cu118          pypi_0    pypi
[conda] torch-geometric           2.6.1                    pypi_0    pypi
[conda] torch-scatter             2.1.2+pt20cu118          pypi_0    pypi
[conda] torch-sparse              0.6.18+pt20cu118          pypi_0    pypi
[conda] torch-spline-conv         1.2.2+pt20cu118          pypi_0    pypi
[conda] torchaudio                2.0.1+cu118              pypi_0    pypi
[conda] torchvision               0.15.1+cu118             pypi_0    pypi
[conda] triton                    2.0.0                    pypi_0    pypi
1234238 added the bug label on Feb 17, 2025