Inductive Representation Learning on Large Graphs

  • William L. Hamilton, Z. Ying, J. Leskovec
  • Published in Neural Information Processing… 7 June 2017
  • Computer Science

Low-dimensional embeddings of nodes in large graphs have proved extremely useful in a variety of prediction tasks, from content recommendation to identifying protein functions. However, most existing approaches require that all nodes in the graph are present during training of the embeddings; these previous approaches are inherently transductive and do not naturally generalize to unseen nodes. Here we present GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data. Instead of training individual embeddings for each node, we learn a function that generates embeddings by sampling and aggregating features from a node’s local neighborhood. Our algorithm outperforms strong baselines on three inductive node-classification benchmarks: we classify the category of unseen nodes in evolving information graphs based on citation and Reddit post data, and we show that our algorithm generalizes to completely unseen graphs using a multi-graph dataset of protein-protein interactions.

1 Introduction

Low-dimensional vector embeddings of nodes in large graphs have proved extremely useful as feature inputs for a wide variety of prediction and graph analysis tasks [5, 11, 28, 35, 36]. (While it is common to refer to these data structures as social or biological networks, we use the term graph to avoid ambiguity with neural network terminology.) The basic idea behind node embedding approaches is to use dimensionality reduction techniques to distill the high-dimensional information about a node's graph neighborhood into a dense vector embedding. These node embeddings can then be fed to downstream machine learning systems and aid in tasks such as node classification, clustering, and link prediction [11, 28, 35].

However, previous works have focused on embedding nodes from a single fixed graph, and many real-world applications require embeddings to be quickly generated for unseen nodes, or entirely new (sub)graphs. This inductive capability is essential for high-throughput, production machine learning systems, which operate on evolving graphs and constantly encounter unseen nodes (e.g., posts on Reddit, users and videos on Youtube). An inductive approach to generating node embeddings also facilitates generalization across graphs with the same form of features: for example, one could train an embedding generator on protein-protein interaction graphs derived from a model organism, and then easily produce node embeddings for data collected on new organisms using the trained model.

The inductive node embedding problem is especially difficult, compared to the transductive setting, because generalizing to unseen nodes requires “aligning” newly observed subgraphs to the node embeddings that the algorithm has already optimized on. An inductive framework must learn to recognize structural properties of a node’s neighborhood that reveal both the node’s local role in the graph, as well as its global position.

Most existing approaches to generating node embeddings are inherently transductive. The majority of these approaches directly optimize the embeddings for each node using matrix-factorization-based objectives, and do not naturally generalize to unseen data, since they make predictions on nodes in a single, fixed graph [ 5 , 11 , 23 , 28 , 35 , 36 , 37 , 39 ] . These approaches can be modified to operate in an inductive setting (e.g., [ 28 ] ), but these modifications tend to be computationally expensive, requiring additional rounds of gradient descent before new predictions can be made. There are also recent approaches to learning over graph structures using convolution operators that offer promise as an embedding methodology [ 17 ] . So far, graph convolutional networks (GCNs) have only been applied in the transductive setting with fixed graphs [ 17 , 18 ] . In this work we both extend GCNs to the task of inductive unsupervised learning and propose a framework that generalizes the GCN approach to use trainable aggregation functions (beyond simple convolutions).

Present work. We propose a general framework, called GraphSAGE (SAmple and aggreGatE), for inductive node embedding. Unlike embedding approaches that are based on matrix factorization, we leverage node features (e.g., text attributes, node profile information, node degrees) in order to learn an embedding function that generalizes to unseen nodes. By incorporating node features in the learning algorithm, we simultaneously learn the topological structure of each node's neighborhood as well as the distribution of node features in the neighborhood. While we focus on feature-rich graphs (e.g., citation data with text attributes, biological data with functional/molecular markers), our approach can also make use of structural features that are present in all graphs (e.g., node degrees). Thus, our algorithm can also be applied to graphs without node features.

Instead of training a distinct embedding vector for each node, we train a set of aggregator functions that learn to aggregate feature information from a node’s local neighborhood (Figure 1 ). Each aggregator function aggregates information from a different number of hops, or search depth, away from a given node. At test, or inference time, we use our trained system to generate embeddings for entirely unseen nodes by applying the learned aggregation functions. Following previous work on generating node embeddings, we design an unsupervised loss function that allows GraphSAGE to be trained without task-specific supervision. We also show that GraphSAGE can be trained in a fully supervised manner.

[Figure 1: Visual illustration of the GraphSAGE sample-and-aggregate approach.]

We evaluate our algorithm on three node-classification benchmarks, which test GraphSAGE's ability to generate useful embeddings on unseen data. We use two evolving document graphs based on citation data and Reddit post data (predicting paper and post categories, respectively), and a multi-graph generalization experiment based on a dataset of protein-protein interactions (predicting protein functions). Using these benchmarks, we show that our approach is able to effectively generate representations for unseen nodes and outperform relevant baselines by a significant margin: across domains, our supervised approach improves classification F1-scores by an average of 51% compared to using node features alone, and GraphSAGE consistently outperforms a strong, transductive baseline [28], despite this baseline taking ∼100× longer to run on unseen nodes. We also show that the new aggregator architectures we propose provide significant gains (7.4% on average) compared to an aggregator inspired by graph convolutional networks [17]. Lastly, we probe the expressive capability of our approach and show, through theoretical analysis, that GraphSAGE is capable of learning structural information about a node's role in a graph, despite the fact that it is inherently based on features (Section 5).

2 Related work

Our algorithm is conceptually related to previous node embedding approaches, general supervised approaches to learning over graphs, and recent advancements in applying convolutional neural networks to graph-structured data. (In the time between this paper's original submission to NIPS 2017 and the submission of the final, accepted, i.e. "camera-ready", version, a number of closely related, e.g. follow-up, works were published on pre-print servers. For temporal clarity, we do not review or compare against these papers in detail.)

Factorization-based embedding approaches . There are a number of recent node embedding approaches that learn low-dimensional embeddings using random walk statistics and matrix factorization-based learning objectives [ 5 , 11 , 28 , 35 , 36 ] . These methods also bear close relationships to more classic approaches to spectral clustering [ 23 ] , multi-dimensional scaling [ 19 ] , as well as the PageRank algorithm [ 25 ] . Since these embedding algorithms directly train node embeddings for individual nodes, they are inherently transductive and, at the very least, require expensive additional training (e.g., via stochastic gradient descent) to make predictions on new nodes. In addition, for many of these approaches (e.g., [ 11 , 28 , 35 , 36 ] ) the objective function is invariant to orthogonal transformations of the embeddings, which means that the embedding space does not naturally generalize between graphs and can drift during re-training. One notable exception to this trend is the Planetoid-I algorithm introduced by Yang et al.  [ 40 ] , which is an inductive, embedding-based approach to semi-supervised learning. However, Planetoid-I does not use any graph structural information during inference; instead, it uses the graph structure as a form of regularization during training. Unlike these previous approaches, we leverage feature information in order to train a model to produce embeddings for unseen nodes.

Supervised learning over graphs . Beyond node embedding approaches, there is a rich literature on supervised learning over graph-structured data. This includes a wide variety of kernel-based approaches, where feature vectors for graphs are derived from various graph kernels (see [ 32 ] and references therein). There are also a number of recent neural network approaches to supervised learning over graph structures [ 7 , 10 , 21 , 31 ] . Our approach is conceptually inspired by a number of these algorithms. However, whereas these previous approaches attempt to classify entire graphs (or subgraphs), the focus of this work is generating useful representations for individual nodes.

Graph convolutional networks . In recent years, several convolutional neural network architectures for learning over graphs have been proposed (e.g., [ 4 , 9 , 8 , 17 , 24 ] ). The majority of these methods do not scale to large graphs or are designed for whole-graph classification (or both) [ 4 , 9 , 8 , 24 ] . However, our approach is closely related to the graph convolutional network (GCN), introduced by Kipf et al. [ 17 , 18 ] . The original GCN algorithm [ 17 ] is designed for semi-supervised learning in a transductive setting, and the exact algorithm requires that the full graph Laplacian is known during training. A simple variant of our algorithm can be viewed as an extension of the GCN framework to the inductive setting, a point which we revisit in Section 3.3 .

3 Proposed method: GraphSAGE

The key idea behind our approach is that we learn how to aggregate feature information from a node’s local neighborhood (e.g., the degrees or text attributes of nearby nodes). We first describe the GraphSAGE embedding generation (i.e., forward propagation) algorithm, which generates embeddings for nodes assuming that the GraphSAGE model parameters are already learned (Section 3.1 ). We then describe how the GraphSAGE model parameters can be learned using standard stochastic gradient descent and backpropagation techniques (Section 3.2 ).

3.1 Embedding generation (i.e., forward propagation) algorithm

The intuition behind Algorithm 1 is that at each iteration, or search depth, nodes aggregate information from their local neighbors, and as this process iterates, nodes incrementally gain more and more information from further reaches of the graph.
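To make this concrete, below is a minimal NumPy sketch of the full-batch forward pass (Algorithm 1) under simplifying assumptions: a mean aggregator with concatenation, a ReLU non-linearity, and toy random weights. The names (`graphsage_forward`, `features`, `neighbors`, `weights`) are illustrative and not part of the released code.

```python
import numpy as np

def graphsage_forward(features, neighbors, weights, K=2):
    """Full-batch sketch of the GraphSAGE forward pass with a mean
    aggregator: at each depth k, aggregate the neighbors' depth-(k-1)
    vectors, concatenate with the node's own vector, apply a linear map
    and ReLU, then normalize to unit length.

    features  : dict node -> 1-D feature array (the h^0_v = x_v vectors)
    neighbors : dict node -> list of neighbor nodes
    weights   : list of K weight matrices W^k
    """
    h = {v: x.copy() for v, x in features.items()}
    for k in range(K):
        h_next = {}
        for v in h:
            agg = np.mean([h[u] for u in neighbors[v]], axis=0)   # aggregate neighbors
            z = np.maximum(weights[k] @ np.concatenate([h[v], agg]), 0.0)
            h_next[v] = z / (np.linalg.norm(z) + 1e-12)           # normalize
        h = h_next
    return h  # z_v = h^K_v

# toy usage: a triangle graph with 4-d features and two depths of width 8
rng = np.random.default_rng(0)
feats = {v: rng.normal(size=4) for v in range(3)}
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
dims = [4, 8, 8]
Ws = [rng.normal(size=(dims[k + 1], 2 * dims[k])) for k in range(2)]
embeddings = graphsage_forward(feats, nbrs, Ws)
```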

To extend Algorithm 1 to the minibatch setting, given a set of input nodes, we first forward sample the required neighborhood sets (up to depth K) and then run the inner loop (line 3 in Algorithm 1), but instead of iterating over all nodes, we compute only the representations that are necessary to satisfy the recursion at each depth (Appendix A contains complete minibatch pseudocode).

3.2 Learning the parameters of GraphSAGE

In order to learn useful, predictive representations in a fully unsupervised setting, we apply a graph-based loss function to the output representations z_u and tune the weight matrices and aggregator parameters via stochastic gradient descent. The graph-based loss function encourages nearby nodes to have similar representations, while enforcing that the representations of disparate nodes are highly distinct:

$$J_{\mathcal{G}}(\mathbf{z}_{u}) = -\log\left(\sigma(\mathbf{z}_{u}^{\top}\mathbf{z}_{v})\right) - Q\cdot\mathbb{E}_{v_{n}\sim P_{n}(v)}\log\left(\sigma(-\mathbf{z}_{u}^{\top}\mathbf{z}_{v_{n}})\right), \tag{1}$$

where v is a node that co-occurs near u on a fixed-length random walk, σ is the sigmoid function, P_n is a negative sampling distribution, and Q defines the number of negative samples. Importantly, unlike previous embedding approaches, the representations z_u that we feed into this loss function are generated from the features contained within a node's local neighborhood, rather than training a unique embedding for each node (via an embedding look-up).

This unsupervised setting emulates situations where node features are provided to downstream machine learning applications, as a service or in a static repository. In cases where representations are to be used only on a specific downstream task, the unsupervised loss (Equation 1 ) can simply be replaced, or augmented, by a task-specific objective (e.g., cross-entropy loss).
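As a sketch of how Equation (1) is evaluated for a single positive pair, assuming the embeddings of the pair and of the Q negative samples have already been produced by the aggregation step (the sum over Q sampled negatives is a Monte Carlo stand-in for the expectation; the names are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def unsupervised_loss(z_u, z_v, z_negs):
    """Equation (1) for one positive random-walk pair (u, v): pull the pair
    together, push u away from Q sampled negatives.

    z_u, z_v : 1-D embedding vectors of the positive pair
    z_negs   : (Q, d) array of embeddings of the Q negative samples
    """
    pos = -np.log(sigmoid(z_u @ z_v) + 1e-12)
    neg = -np.sum(np.log(sigmoid(-z_negs @ z_u) + 1e-12))
    return pos + neg

rng = np.random.default_rng(0)
d, Q = 16, 20
loss = unsupervised_loss(rng.normal(size=d), rng.normal(size=d),
                         rng.normal(size=(Q, d)))
```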

3.3 Aggregator Architectures

Unlike machine learning over N-D lattices (e.g., sentences, images, or 3-D volumes), a node’s neighbors have no natural ordering; thus, the aggregator functions in Algorithm 1 must operate over an unordered set of vectors. Ideally, an aggregator function would be symmetric (i.e., invariant to permutations of its inputs) while still being trainable and maintaining high representational capacity. The symmetry property of the aggregation function ensures that our neural network model can be trained and applied to arbitrarily ordered node neighborhood feature sets. We examined three candidate aggregator functions:

Mean aggregator. Our first candidate aggregator function is simply the elementwise mean of the vectors in {h_u^{k−1}, ∀u ∈ 𝒩(v)}. The mean aggregator is nearly equivalent to the convolutional propagation rule used in the transductive GCN framework [17]; in particular, we can derive an inductive variant of the GCN approach by replacing lines 4 and 5 in Algorithm 1 with

$$\mathbf{h}^{k}_{v} \leftarrow \sigma\left(\mathbf{W}\cdot\textrm{MEAN}\left(\{\mathbf{h}^{k-1}_{v}\}\cup\{\mathbf{h}^{k-1}_{u},\forall u\in\mathcal{N}(v)\}\right)\right). \tag{2}$$

We call this modified mean-based aggregator convolutional since it is a rough, linear approximation of a localized spectral convolution [17]. An important distinction between this convolutional aggregator and our other proposed aggregators is that it does not perform the concatenation operation in line 5 of Algorithm 1; i.e., the convolutional aggregator does not concatenate the node's previous-layer representation h_v^{k−1} with the aggregated neighborhood vector h_{𝒩(v)}^{k}. This concatenation can be viewed as a simple form of a "skip connection" [13] between the different "search depths", or "layers", of the GraphSAGE algorithm, and it leads to significant gains in performance (Section 4).

LSTM aggregator. We also examined a more complex aggregator based on an LSTM architecture [14]. Compared to the mean aggregator, LSTMs have the advantage of larger expressive capability. However, it is important to note that LSTMs are not inherently symmetric (i.e., they are not permutation invariant), since they process their inputs in a sequential manner. We adapt LSTMs to operate on an unordered set by simply applying the LSTMs to a random permutation of the node's neighbors.

Pooling aggregator. The final aggregator we examine is both symmetric and trainable. In this pooling approach, each neighbor's vector is independently fed through a fully-connected neural network; following this transformation, an elementwise max-pooling operation is applied to aggregate information across the neighbor set:

$$\textrm{AGGREGATE}^{\textrm{pool}}_{k} = \max\left(\{\sigma\left(\mathbf{W}_{\textrm{pool}}\mathbf{h}^{k}_{u_{i}} + \mathbf{b}\right),\forall u_{i}\in\mathcal{N}(v)\}\right), \tag{3}$$

where max denotes the element-wise max operator and σ is a nonlinear activation function. In principle, the function applied before the max pooling can be an arbitrarily deep multi-layer perceptron, but we focus on simple single-layer architectures in this work. This approach is inspired by recent advancements in applying neural network architectures to learn over general point sets [29]. Intuitively, the multi-layer perceptron can be thought of as a set of functions that compute features for each of the node representations in the neighbor set. By applying the max-pooling operator to each of the computed features, the model effectively captures different aspects of the neighborhood set. Note also that, in principle, any symmetric vector function could be used in place of the max operator (e.g., an element-wise mean). We found no significant difference between max- and mean-pooling in development tests and thus focused on max-pooling for the rest of our experiments.
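Minimal NumPy sketches of the GCN-style mean aggregator (Equation 2) and the max-pooling aggregator (Equation 3), with illustrative weight shapes; these are conceptual sketches under the stated assumptions, not the released implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gcn_mean_aggregate(h_self, h_neighbors, W):
    """Equation (2): average the node's own vector together with its
    neighbors' vectors, then apply a single linear map and non-linearity
    (no self/neighborhood concatenation)."""
    mean = np.mean(np.vstack([h_self[None, :], h_neighbors]), axis=0)
    return relu(W @ mean)

def pool_aggregate(h_self, h_neighbors, W_pool, b, W):
    """Equation (3): push each neighbor through a shared one-layer MLP and
    take an element-wise max; the result is then combined with the node's
    own vector as in line 5 of Algorithm 1 (concatenate + transform)."""
    transformed = relu(h_neighbors @ W_pool.T + b)   # (|N(v)|, d_pool)
    pooled = transformed.max(axis=0)                 # element-wise max
    return relu(W @ np.concatenate([h_self, pooled]))

# toy shapes: 8-d hidden vectors, 16-d pooling layer, 5 sampled neighbors
rng = np.random.default_rng(0)
d, d_pool, d_out, n_neigh = 8, 16, 8, 5
h_v = rng.normal(size=d)
h_nbrs = rng.normal(size=(n_neigh, d))
z_gcn = gcn_mean_aggregate(h_v, h_nbrs, rng.normal(size=(d_out, d)))
z_pool = pool_aggregate(h_v, h_nbrs, rng.normal(size=(d_pool, d)),
                        rng.normal(size=d_pool),
                        rng.normal(size=(d_out, d + d_pool)))
```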

4 Experiments

We test the performance of GraphSAGE on three benchmark tasks: (i) classifying academic papers into different subjects using the Web of Science citation dataset, (ii) classifying Reddit posts as belonging to different communities, and (iii) classifying protein functions across various biological protein-protein interaction (PPI) graphs. Sections 4.1 and 4.2 summarize the datasets, and the supplementary material contains additional information. In all these experiments, we perform predictions on nodes that are not seen during training, and, in the case of the PPI dataset, we test on entirely unseen graphs.

Experimental set-up. To contextualize the empirical results on our inductive benchmarks, we compare against four baselines: a random classifier, a logistic regression feature-based classifier (that ignores graph structure), the DeepWalk algorithm [28] as a representative factorization-based approach, and a concatenation of the raw features and DeepWalk embeddings. We also compare four variants of GraphSAGE that use the different aggregator functions (Section 3.3). Since the "convolutional" variant of GraphSAGE is an extended, inductive version of Kipf et al.'s semi-supervised GCN [17], we term this variant GraphSAGE-GCN. We test unsupervised variants of GraphSAGE trained according to the loss in Equation (1), as well as supervised variants that are trained directly on the classification cross-entropy loss. For all the GraphSAGE variants we used rectified linear units as the non-linearity and set K = 2 with neighborhood sample sizes S_1 = 25 and S_2 = 10 (see Section 4.3 for sensitivity analyses).

For the Reddit and citation datasets, we use “online” training for DeepWalk as described in Perozzi et al.  [ 28 ] , where we run a new round of SGD optimization to embed the new test nodes before making predictions (see the Appendix for details). In the multi-graph setting, we cannot apply DeepWalk, since the embedding spaces generated by running the DeepWalk algorithm on different disjoint graphs can be arbitrarily rotated with respect to each other (Appendix D ).

All models were implemented in TensorFlow [1] with the Adam optimizer [16] (except DeepWalk, which performed better with the vanilla gradient descent optimizer). We designed our experiments with the goals of (i) verifying the improvement of GraphSAGE over the baseline approaches (i.e., raw features and DeepWalk) and (ii) providing a rigorous comparison of the different GraphSAGE aggregator architectures. In order to provide a fair comparison, all models share an identical implementation of their minibatch iterators, loss function and neighborhood sampler (when applicable). Moreover, in order to guard against unintentional "hyperparameter hacking" in the comparisons between GraphSAGE aggregators, we sweep over the same set of hyperparameters for all GraphSAGE variants (choosing the best setting for each variant according to performance on a validation set). The set of possible hyperparameter values was determined on early validation tests using subsets of the citation and Reddit data that we then discarded from our analyses. The appendix contains further implementation details. (Code and links to the datasets: http://snap.stanford.edu/graphsage/)

4.1 Inductive learning on evolving graphs: Citation and Reddit data

Table 1: Prediction results (F1 scores) for the three datasets.

| Name | Citation (Unsup. F1) | Citation (Sup. F1) | Reddit (Unsup. F1) | Reddit (Sup. F1) | PPI (Unsup. F1) | PPI (Sup. F1) |
|---|---|---|---|---|---|---|
| Random | 0.206 | 0.206 | 0.043 | 0.042 | 0.396 | 0.396 |
| Raw features | 0.575 | 0.575 | 0.585 | 0.585 | 0.422 | 0.422 |
| DeepWalk | 0.565 | 0.565 | 0.324 | 0.324 | — | — |
| DeepWalk + features | 0.701 | 0.701 | 0.691 | 0.691 | — | — |
| GraphSAGE-GCN | 0.742 | 0.772 | 0.908 | 0.930 | 0.465 | 0.500 |
| GraphSAGE-mean | 0.778 | 0.820 | 0.897 | 0.950 | 0.486 | 0.598 |
| GraphSAGE-LSTM | 0.788 | 0.832 | 0.907 | 0.954 | 0.482 | 0.612 |
| GraphSAGE-pool | 0.798 | 0.839 | 0.892 | 0.948 | 0.502 | 0.600 |
| % gain over feat. | 39% | 46% | 55% | 63% | 19% | 45% |

[Figure 2: (A) training and test runtimes for the different approaches; (B) predictive performance as a function of neighborhood sample size.]

Our first two experiments are on classifying nodes in evolving information graphs, a task that is especially relevant to high-throughput production systems, which constantly encounter unseen data.

Citation data. Our first task is predicting paper subject categories on a large citation dataset. We use an undirected citation graph dataset derived from the Thomson Reuters Web of Science Core Collection, corresponding to all papers in six biology-related fields for the years 2000-2005. The node labels for this dataset correspond to the six different field labels. In total, this dataset contains 302,424 nodes with an average degree of 9.15. We train all the algorithms on the 2000-2004 data and use the 2005 data for testing (with 30% used for validation). For features, we used node degrees and processed the paper abstracts according to Arora et al.'s [2] sentence embedding approach, with 300-dimensional word vectors trained using the GenSim word2vec implementation [30].

Reddit data. In our second task, we predict which community different Reddit posts belong to. Reddit is a large online discussion forum where users post and comment on content in different topical communities. We constructed a graph dataset from Reddit posts made in the month of September 2014. The node label in this case is the community, or "subreddit", that a post belongs to. We sampled 50 large communities and built a post-to-post graph, connecting posts if the same user comments on both. In total this dataset contains 232,965 posts with an average degree of 492. We use the first 20 days for training and the remaining days for testing (with 30% used for validation). For features, we use off-the-shelf 300-dimensional GloVe CommonCrawl word vectors [27]; for each post, we concatenated (i) the average embedding of the post title, (ii) the average embedding of all the post's comments, (iii) the post's score, and (iv) the number of comments made on the post.

The first four columns of Table 1 summarize the performance of GraphSAGE  as well as the baseline approaches on these two datasets. We find that GraphSAGE outperforms all the baselines by a significant margin, and the trainable, neural network aggregators provide significant gains compared to the GCN approach. For example, the unsupervised variant GraphSAGE-pool outperforms the concatenation of the DeepWalk embeddings and the raw features by 13.8% on the citation data and 29.1% on the Reddit data, while the supervised version provides a gain of 19.7% and 37.2%, respectively. Interestingly, the LSTM based aggregator shows strong performance, despite the fact that it is designed for sequential data and not unordered sets. Lastly, we see that the performance of unsupervised GraphSAGE is reasonably competitive with the fully supervised version, indicating that our framework can achieve strong performance without task-specific fine-tuning.

4.2 Generalizing across graphs: Protein-protein interactions

We now consider the task of generalizing across graphs, which requires learning about node roles rather than community structure. We classify protein roles—in terms of their cellular functions from gene ontology—in various protein-protein interaction (PPI) graphs, with each graph corresponding to a different human tissue [ 41 ] . We use positional gene sets, motif gene sets and immunological signatures as features and gene ontology sets as labels (121 in total), collected from the Molecular Signatures Database [ 34 ] . The average graph contains 2373 nodes, with an average degree of 28.8. We train all algorithms on 20 graphs and then average prediction F1 scores on two test graphs (with two other graphs used for validation).

The final two columns of Table 1 summarize the accuracies of the various approaches on this data. Again we see that GraphSAGE significantly outperforms the baseline approaches, with the LSTM- and pooling-based aggregators providing substantial gains over the mean- and GCN-based aggregators. (Note that in very recent follow-up work, Chen and Zhu [6] achieve superior performance by optimizing the GraphSAGE hyperparameters specifically for the PPI task and implementing new training techniques, e.g., dropout, layer normalization, and a new sampling scheme. We refer the reader to their work for the current state-of-the-art numbers on the PPI dataset that are possible using a variant of the GraphSAGE approach.)

4.3 Runtime and parameter sensitivity

Figure 2.A summarizes the training and test runtimes for the different approaches. The training times for the methods are comparable (with GraphSAGE-LSTM being the slowest). However, the need to sample new random walks and run new rounds of SGD to embed unseen nodes makes DeepWalk 100-500× slower at test time.

For the GraphSAGE variants, we found that setting K = 2 provided a consistent boost in accuracy of around 10-15%, on average, compared to K = 1; however, increasing K beyond 2 gave marginal returns in performance (0-5%) while increasing the runtime by a prohibitively large factor of 10-100×, depending on the neighborhood sample size. We also found diminishing returns for sampling large neighborhoods (Figure 2.B). Thus, despite the higher variance induced by sub-sampling neighborhoods, GraphSAGE is still able to maintain strong predictive accuracy, while significantly improving the runtime.

4.4 Summary comparison between the different aggregator architectures

Overall, we found that the LSTM- and pool-based aggregators performed the best, in terms of both average performance and number of experimental settings where they were the top-performing method (Table 1). To give more quantitative insight into these trends, we consider each of the six different experimental settings (i.e., (3 datasets) × (unsupervised vs. supervised)) as trials and consider what performance trends are likely to generalize. In particular, we use the non-parametric Wilcoxon Signed-Rank Test [33] to quantify the differences between the different aggregators across trials, reporting the T-statistic and p-value where applicable. Note that this method is rank-based and essentially tests whether we would expect one particular approach to outperform another in a new experimental setting. Given our small sample size of only 6 different settings, this significance test is somewhat underpowered; nonetheless, the T-statistic and associated p-values are useful quantitative measures to assess the aggregators' relative performances.
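This style of comparison can be reproduced with a standard implementation of the test; the sketch below pairs the GraphSAGE-pool and GraphSAGE-GCN columns of Table 1 across the six settings. Note that SciPy's defaults (two-sided alternative, exact null distribution for small samples) need not match the exact variant behind the p-values reported in the text.

```python
import numpy as np
from scipy.stats import wilcoxon

# F1 scores from Table 1 for the six settings:
# (citation, Reddit, PPI) x (unsupervised, supervised)
pool_f1 = np.array([0.798, 0.839, 0.892, 0.948, 0.502, 0.600])
gcn_f1 = np.array([0.742, 0.772, 0.908, 0.930, 0.465, 0.500])

# Paired, rank-based test over the six trials; the returned statistic is the
# smaller signed-rank sum (T = 1.0 here), while the p-value depends on the
# chosen alternative/approximation.
stat, p_value = wilcoxon(pool_f1, gcn_f1)
print(f"T = {stat:.1f}, p = {p_value:.3f}")
```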

We see that the LSTM-, pool- and mean-based aggregators all provide statistically significant gains over the GCN-based approach (T = 1.0, p = 0.02 for all three). However, the gains of the LSTM and pool approaches over the mean-based aggregator are more marginal (T = 1.5, p = 0.03, comparing LSTM to mean; T = 4.5, p = 0.10, comparing pool to mean). There is no significant difference between the LSTM and pool approaches (T = 10.0, p = 0.46). However, GraphSAGE-LSTM is significantly slower than GraphSAGE-pool (by a factor of ≈2×), perhaps giving the pooling-based aggregator a slight edge overall.

5 Theoretical analysis

In this section, we probe the expressive capabilities of GraphSAGE in order to provide insight into how GraphSAGE can learn about graph structure, even though it is inherently based on features. As a case-study, we consider whether GraphSAGE can learn to predict the clustering coefficient of a node, i.e., the proportion of triangles that are closed within the node’s 1-hop neighborhood [ 38 ] . The clustering coefficient is a popular measure of how clustered a node’s local neighborhood is, and it serves as a building block for many more complicated structural motifs [ 3 ] . We can show that Algorithm 1 is capable of approximating clustering coefficients to an arbitrary degree of precision:

Theorem 1.

Let x_v ∈ U, ∀v ∈ 𝒱 be the input features for Algorithm 1 on graph 𝒢 = (𝒱, ℰ), where U is any compact subset of ℝ^d, and suppose that there exists a fixed positive constant C ∈ ℝ⁺ such that ‖x_v − x_{v′}‖₂ > C for all pairs of nodes. Then we have that ∀ε > 0 there exists a parameter setting Θ* for Algorithm 1 such that after K = 4 iterations

$$|z_{v} - c_{v}| < \epsilon, \quad \forall v\in\mathcal{V},$$

where z_v ∈ ℝ are the final output values generated by Algorithm 1 and c_v are the node clustering coefficients.

Theorem 1 states that for any graph there exists a parameter setting for Algorithm 1 such that it can approximate clustering coefficients in that graph to an arbitrary precision, if the features for every node are distinct (and if the model is sufficiently high-dimensional). The full proof of Theorem 1 is in the Appendix. Note that as a corollary of Theorem 1, GraphSAGE can learn about local graph structure, even when the node feature inputs are sampled from an absolutely continuous random distribution (see the Appendix for details). The basic idea behind the proof is that if each node has a unique feature representation, then we can learn to map nodes to indicator vectors and identify node neighborhoods. The proof of Theorem 1 relies on some properties of the pooling aggregator, which also provides insight into why GraphSAGE-pool outperforms the GCN and mean-based aggregators.
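For reference, the quantity being approximated, the local clustering coefficient c_v, can be computed directly from the graph's adjacency structure; a small illustrative sketch follows (NetworkX's nx.clustering computes the same quantity).

```python
from itertools import combinations

def clustering_coefficient(adj, v):
    """Local clustering coefficient c_v: the fraction of pairs of v's
    neighbors that are themselves connected, i.e. the proportion of
    closed triangles in v's 1-hop neighborhood."""
    nbrs = list(adj[v])
    if len(nbrs) < 2:
        return 0.0
    closed = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return closed / (len(nbrs) * (len(nbrs) - 1) / 2)

# toy graph: node 0 has neighbors 1, 2, 3; only the pair (1, 2) is connected
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(clustering_coefficient(adj, 0))  # 1 of 3 neighbor pairs closed -> 0.333...
```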

6 Conclusion

We introduced a novel approach that allows embeddings to be efficiently generated for unseen nodes. GraphSAGE consistently outperforms state-of-the-art baselines, effectively trades off performance and runtime by sampling node neighborhoods, and our theoretical analysis provides insight into how our approach can learn about local graph structures. A number of extensions and potential improvements are possible, such as extending GraphSAGE to incorporate directed or multi-modal graphs. A particularly interesting direction for future work is exploring non-uniform neighborhood sampling functions, and perhaps even learning these functions as part of the GraphSAGE optimization.

Acknowledgments

The authors thank Austin Benson, Aditya Grover, Bryan He, Dan Jurafsky, Alex Ratner, Marinka Zitnik, and Daniel Selsam for their helpful discussions and comments on early drafts. The authors would also like to thank Ben Johnson for his many useful questions and comments on our code and Nikhil Mehta and Yuhui Ding for catching some minor errors in a previous version of the appendix. This research has been supported in part by NSF IIS-1149837, DARPA SIMPLEX, Stanford Data Science Initiative, Huawei, and Chan Zuckerberg Biohub. WLH was also supported by the SAP Stanford Graduate Fellowship and an NSERC PGS-D grant. The views and conclusions expressed in this material are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the above funding agencies, corporations, or the U.S. and Canadian governments.

  • [1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint , 2016.
  • [2] S. Arora, Y. Liang, and T. Ma. A simple but tough-to-beat baseline for sentence embeddings. In ICLR , 2017.
  • [3] A. R. Benson, D. F. Gleich, and J. Leskovec. Higher-order organization of complex networks. Science , 353(6295):163–166, 2016.
  • [4] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. In ICLR , 2014.
  • [5] S. Cao, W. Lu, and Q. Xu. Grarep: Learning graph representations with global structural information. In KDD , 2015.
  • [6] J. Chen and J. Zhu. Stochastic training of graph convolutional networks. arXiv preprint arXiv:1710.10568 , 2017.
  • [7] H. Dai, B. Dai, and L. Song. Discriminative embeddings of latent variable models for structured data. In ICML , 2016.
  • [8] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NIPS , 2016.
  • [9] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional networks on graphs for learning molecular fingerprints. In NIPS , 2015.
  • [10] M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains. In IEEE International Joint Conference on Neural Networks , volume 2, pages 729–734, 2005.
  • [11] A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks. In KDD , 2016.
  • [12] W. L. Hamilton, J. Leskovec, and D. Jurafsky. Diachronic word embeddings reveal statistical laws of semantic change. In ACL , 2016.
  • [13] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV, 2016.
  • [14] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation , 9(8):1735–1780, 1997.
  • [15] K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks , 4(2):251–257, 1991.
  • [16] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR , 2015.
  • [17] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR , 2016.
  • [18] T. N. Kipf and M. Welling. Variational graph auto-encoders. In NIPS Workshop on Bayesian Deep Learning , 2016.
  • [19] J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika , 29(1):1–27, 1964.
  • [20] O. Levy and Y. Goldberg. Neural word embedding as implicit matrix factorization. In NIPS , 2014.
  • [21] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated graph sequence neural networks. In ICLR , 2015.
  • [22] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS , 2013.
  • [23] A. Y. Ng, M. I. Jordan, Y. Weiss, et al. On spectral clustering: Analysis and an algorithm. In NIPS , 2001.
  • [24] M. Niepert, M. Ahmed, and K. Kutzkov. Learning convolutional neural networks for graphs. In ICML , 2016.
  • [25] L. Page, S. Brin, R. Motwani, and T. Winograd. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
  • [26] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research , 12:2825–2830, 2011.
  • [27] J. Pennington, R. Socher, and C. D. Manning. Glove: Global vectors for word representation. In EMNLP , 2014.
  • [28] B. Perozzi, R. Al-Rfou, and S. Skiena. Deepwalk: Online learning of social representations. In KDD , 2014.
  • [29] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR , 2017.
  • [30] R. Řehůřek and P. Sojka. Software Framework for Topic Modelling with Large Corpora. In LREC , 2010.
  • [31] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Transactions on Neural Networks , 20(1):61–80, 2009.
  • [32] N. Shervashidze, P. Schweitzer, E. J. v. Leeuwen, K. Mehlhorn, and K. M. Borgwardt. Weisfeiler-lehman graph kernels. Journal of Machine Learning Research , 12:2539–2561, 2011.
  • [33] S. Siegal. Nonparametric statistics for the behavioral sciences . McGraw-hill, 1956.
  • [34] A. Subramanian, P. Tamayo, V. K. Mootha, S. Mukherjee, B. L. Ebert, M. A. Gillette, A. Paulovich, S. L. Pomeroy, T. R. Golub, E. S. Lander, et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proceedings of the National Academy of Sciences , 102(43):15545–15550, 2005.
  • [35] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei. Line: Large-scale information network embedding. In WWW , 2015.
  • [36] D. Wang, P. Cui, and W. Zhu. Structural deep network embedding. In KDD , 2016.
  • [37] X. Wang, P. Cui, J. Wang, J. Pei, W. Zhu, and S. Yang. Community preserving network embedding. In AAAI , 2017.
  • [38] D. J. Watts and S. H. Strogatz. Collective dynamics of ‘small-world’ networks. Nature , 393(6684):440–442, 1998.
  • [39] L. Xu, X. Wei, J. Cao, and P. S. Yu. Embedding identity and interest for social networks. In WWW , 2017.
  • [40] Z. Yang, W. Cohen, and R. Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In ICML , 2016.
  • [41] M. Zitnik and J. Leskovec. Predicting multicellular function through multi-layer tissue networks. Bioinformatics , 33(14):190–198, 2017.

Appendix A Minibatch pseudocode

In order to use stochastic gradient descent, we adapt our algorithm to allow forward and backward propagation for minibatches of nodes and edges. Here we focus on the minibatch forward propagation algorithm, analogous to Algorithm 1. In the forward propagation of GraphSAGE, the minibatch ℬ contains the nodes that we want to generate representations for. Algorithm 2 gives the pseudocode for the minibatch approach.

In the sampling stage of Algorithm 2, each set ℬ^k is constructed so that it contains the nodes needed to compute the representations of the nodes in ℬ^{k+1} at the (k+1)-st iteration, or "layer", of Algorithm 1. Lines 9-15 correspond to the aggregation stage, which is almost identical to the batch inference algorithm. Note that in Lines 12 and 13, the representation at iteration k of any node in set ℬ^k can be computed, because its representation at iteration k−1 and the representations of its sampled neighbors at iteration k−1 have already been computed in the previous loop. The algorithm thus avoids computing the representations for nodes that are not in the current minibatch and not used during the current iteration of stochastic gradient descent. We use the notation 𝒩_k(u) to denote a deterministic function which specifies a random sample of a node's neighborhood (i.e., the randomness is assumed to be pre-computed in the mappings). We index this function by k to denote the fact that the random samples are independent across iterations over k. We use a uniform sampling function in this work and sample with replacement in cases where the sample size is larger than the node's degree.

Note that the sampling process in Algorithm 2 is conceptually reversed compared to the iterations over k in Algorithm 1: we start with the "layer-K" nodes (i.e., the nodes in ℬ) that we want to generate representations for; then we sample their neighbors (i.e., the nodes at "layer-(K−1)" of the algorithm), and so on. One consequence of this is that the definition of the neighborhood sample sizes can be somewhat counterintuitive. In particular, if we use K = 2 total iterations with sample sizes S_1 and S_2, then this means that we sample S_1 nodes during iteration k = 1 of Algorithm 1 and S_2 nodes during iteration k = 2, and, from the perspective of the "target" nodes in ℬ that we want to generate representations for after iteration k = 2, this amounts to sampling S_2 of their immediate neighbors and S_1 · S_2 of their 2-hop neighbors.
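A minimal sketch of this reversed, top-down sampling stage, assuming uniform sampling with replacement; the function and variable names are illustrative, not the paper's pseudocode.

```python
import random

def sample_minibatch_sets(batch, neighbors, sample_sizes):
    """Top-down sampling in the style of Algorithm 2: start from the target
    nodes B = B^K and, for k = K..1, grow B^{k-1} by adding a fixed-size
    uniform sample (with replacement) of the neighbors of every node in B^k,
    so that only the representations the recursion actually needs are computed.

    batch        : list of target nodes (the minibatch B)
    neighbors    : dict node -> list of neighbor nodes
    sample_sizes : [S_1, ..., S_K]; S_k is used when expanding layer k
    """
    K = len(sample_sizes)
    layers = {K: set(batch)}
    for k in range(K, 0, -1):
        expanded = set(layers[k])
        for v in layers[k]:
            expanded.update(random.choices(neighbors[v], k=sample_sizes[k - 1]))
        layers[k - 1] = expanded
    return layers  # layers[0] holds every node whose input features are needed

# toy ring-like graph; with K = 2, each target node gets S_2 immediate
# neighbors and up to S_1 * S_2 two-hop neighbors, as described above
random.seed(0)
nbrs = {v: [(v + 1) % 10, (v - 1) % 10, (v + 2) % 10] for v in range(10)}
sets = sample_minibatch_sets([0, 5], nbrs, sample_sizes=[25, 10])
```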

Appendix B Additional Dataset Details

In this section, we provide some additional, relevant dataset details. The full PPI and Reddit datasets are available at: http://snap.stanford.edu/graphsage/ . The Web of Science dataset (WoS) is licensed by Thomson Reuters and can be made available to groups with valid WoS licenses.

Reddit data

To sample communities, we ranked communities by their total number of comments in 2014 and selected the communities with ranks [11,50] (inclusive). We omitted the largest communities because they are large, generic default communities that substantially skew the class distribution. We selected the largest connected component of the graph defined over the union of these communities. We performed early validation experiments and model development on data from October and November, 2014.

Details on the source of the Reddit data are at: https://archive.org/details/FullRedditSubmissionCorpus2006ThruAugust2015 and https://archive.org/details/2015_reddit_comments_corpus .

Citation data

We selected the following subfields manually, based on them being of relatively equal size and all being biology-related fields. We performed early validation and model development on the neuroscience subfield (code=RU, which is excluded from our final set). We did not run any experiments on any other subsets of the WoS data. We took the largest connected component of the graph defined over the union of these fields.

Immunology (code: NI, number of documents: 77356)

Ecology (code: GU, number of documents: 37935)

Biophysics (code: DA, number of documents: 36688)

Endocrinology and Metabolism (code: IA, number of documents: 52225).

Cell Biology (code: DR, number of documents: 84231)

Biology (other) (code: CU, number of documents: 13988)

PPI Tissue Data

For training, we randomly selected 20 PPI networks that had at least 15,000 edges. For testing and validation, we selected 4 large networks (2 for validation, 2 for testing, each with at least 35,000 edges). All experiments for model design and development were performed on the same 2 validation networks, and we used the same random training set in all experiments.

We selected features that included at least 10% of the proteins that appear in any of the PPI graphs. Note that the feature data is very sparse for this dataset (42% of nodes have no non-zero feature values), which makes leveraging neighborhood information critical.

Appendix C Details on the Experimental Setup and Hyperparameter Tuning

Random walks for the unsupervised objective.

For all settings, we ran 50 random walks of length 5 from each node in order to obtain the pairs needed for the unsupervised loss (Equation 1 ). Our implementation of the random walks is in pure Python and is based directly on Python code provided by Perozzi et al.  [ 28 ] .

Logistic regression model

For the feature-only model, and to make predictions on the embeddings output from the unsupervised models, we used the logistic SGDClassifier from the scikit-learn Python package [26], with all default settings. Note that this model is always optimized only on the training nodes and is not fine-tuned on the embeddings that are generated for the test data.
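A sketch of this downstream step with scikit-learn, using randomly generated stand-in embeddings and labels; note that the logistic loss is spelled "log" in older scikit-learn releases and "log_loss" in current ones.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
# stand-in data: 256-d embeddings for 500 training and 100 test nodes, 6 classes
train_emb, test_emb = rng.normal(size=(500, 256)), rng.normal(size=(100, 256))
y_train, y_test = rng.integers(0, 6, 500), rng.integers(0, 6, 100)

# logistic-loss SGD classifier fit on the training-node embeddings only;
# the embedding model itself is never fine-tuned on the test data
clf = SGDClassifier(loss="log_loss").fit(train_emb, y_train)
print(clf.score(test_emb, y_test))
```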

Hyperparameter selection

In all settings, we performed hyperparameter selection on the learning rate and the model dimension. With the exception of DeepWalk, we performed a parameter sweep on initial learning rates {0.01, 0.001, 0.0001} for the supervised models and {2×10⁻⁶, 2×10⁻⁷, 2×10⁻⁸} for the unsupervised models. (Note that these values differ from our previously reported pre-print values because they are corrected to account for an extraneous normalization by the batch size. We thank Ben Johnson for pointing out this discrepancy.) When applicable, we tested a "big" and a "small" version of each model, where we tried to keep the overall model sizes comparable. For the pooling aggregator, the "big" model had a pooling dimension of 1024, while the "small" model had a dimension of 512. For the LSTM aggregator, the "big" model had a hidden dimension of 256, while the "small" model had a hidden dimension of 128; note that the actual parameter count for the LSTM is roughly 4× this number, due to the weights for the different gates. In all experiments and for all models, we specify the output dimension of the h_i^k vectors at every depth k of the recursion to be 256. All models use rectified linear units as the non-linear activation function. All the unsupervised GraphSAGE models and DeepWalk used 20 negative samples with context distribution smoothing over node degrees using a smoothing parameter of 0.75, following [11, 22, 28]. Initial experiments revealed that DeepWalk performed much better with large learning rates, so we swept over rates in the set {0.2, 0.4, 0.8}. For the supervised GraphSAGE methods, we ran 10 epochs for all models. All methods except DeepWalk used batch sizes of 512. We found that DeepWalk achieved faster wall-clock convergence with a smaller batch size of 64.

Except for DeepWalk, we ran experiments on a single machine with 4 NVIDIA Titan X Pascal GPUs (12Gb of RAM at 10Gbps speed), 16 Intel Xeon CPUs (E5-2623 v4 @ 2.60GHz), and 256Gb of RAM. DeepWalk was faster on a CPU-intensive machine with 144 Intel Xeon CPUs (E7-8890 v3 @ 2.50GHz) and 2Tb of RAM. Overall, our experiments took about 3 days in a shared resource setting. We expect that a consumer-grade single-GPU machine (e.g., with a Titan X GPU) could complete our full set of experiments in 4-7 days, if its full resources were dedicated.

Notes on the DeepWalk implementation

Existing DeepWalk implementations [28, 11] are simply wrappers around dedicated word2vec code, and they do not easily support embedding new nodes and other variations. Moreover, this makes it difficult to compare runtimes and other statistics for these approaches. For this reason, we reimplemented DeepWalk in pure TensorFlow, using the vector initializations etc. that are described in the TensorFlow word2vec tutorial (https://github.com/tensorflow/models/blob/master/tutorials/embedding/word2vec.py).

We found that DeepWalk was much slower to converge than the other methods, and since it is 2-5× faster at training, we gave it 5 passes over the random walk data, instead of one. To update the DeepWalk method on new data, we ran 50 random walks of length 5 (as described above) and performed updates on the embeddings for the new nodes while holding the already trained embeddings fixed. We also tested two variants, one where we restricted the sampled random walk "context nodes" to only be from the set of already trained nodes (which alleviates statistical drift) and an approach without this restriction. We always selected the better-performing variant. Note that despite DeepWalk's poor performance on the inductive task, it is far more competitive when tested in the transductive setting, where it can be extensively trained on a single, fixed graph. (That said, Kipf et al. [17, 18] found that GCN-based approaches consistently outperformed DeepWalk, even in the transductive setting on link prediction, a task that theoretically favors DeepWalk.) We did observe that DeepWalk's performance could improve with further training, and in some cases it could become competitive with the unsupervised GraphSAGE approaches (but not the supervised approaches) if we let it run for >1000× longer than the other approaches (in terms of wall-clock time for prediction on the test set); however, we did not deem this to be a meaningful comparison for the inductive task.

Note that DeepWalk is also equivalent to the node2vec model [11] with p = q = 1.

Notes on neighborhood sampling

Due to the heavy-tailed nature of degree distributions, we downsample the edges in all graphs before feeding them into the GraphSAGE algorithm. In particular, we subsample edges so that no node has a degree larger than 128. Since we only sample at most 25 neighbors per node, this is a reasonable tradeoff. This downsampling allows us to store neighborhood information as dense adjacency lists, which drastically improves computational efficiency. For the Reddit data, we also downsampled the edges of the original graph as a pre-processing step, since the original graph is extremely dense. All experiments are on the downsampled version, but we release the full version on the project website for reference.
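A minimal sketch of this degree-capping step, assuming an adjacency-list representation; the cap of 128 matches the text, everything else is illustrative.

```python
import random

def cap_degrees(neighbors, max_degree=128, seed=0):
    """Subsample each adjacency list so that no node keeps more than
    `max_degree` neighbors, which allows dense fixed-width storage.
    (Applied per list, so the capped structure may be asymmetric.)"""
    rng = random.Random(seed)
    capped = {}
    for v, nbrs in neighbors.items():
        nbrs = list(nbrs)
        capped[v] = rng.sample(nbrs, max_degree) if len(nbrs) > max_degree else nbrs
    return capped
```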

Appendix D Alignment Issues and Orthogonal Invariance for DeepWalk and Related Approaches

DeepWalk [ 28 ] , node2vec [ 11 ] , and other recent successful node embedding approaches employ objective functions of the form:

(4)

By connection to word embedding approaches and the arguments of [20], these approaches can also be viewed as stochastic, implicit matrix factorizations where we are trying to learn a matrix Z ∈ ℝ^{|𝒱|×d} such that

$$\mathbf{Z}\mathbf{Z}^{\top} \approx \mathbf{M}, \tag{5}$$

where M is some matrix containing random walk statistics.

An important consequence of this structure is that the embeddings can be rotated by an arbitrary orthogonal matrix, without impacting the objective:

$$(\mathbf{Z}\mathbf{Q})(\mathbf{Z}\mathbf{Q})^{\top} = \mathbf{Z}\mathbf{Q}\mathbf{Q}^{\top}\mathbf{Z}^{\top} = \mathbf{Z}\mathbf{Z}^{\top}, \tag{6}$$

where Q ∈ ℝ^{d×d} is any orthogonal matrix. Since the embeddings are otherwise unconstrained and the only error signal comes from the orthogonally-invariant objective (4), the entire embedding space is free to arbitrarily rotate during training.

Two clear consequences of this are:

1. Suppose we run an embedding approach based on (4) on two separate graphs A and B using the same output dimension. Without some explicit penalty enforcing alignment, the learned embedding spaces for the two graphs will be arbitrarily rotated with respect to each other after training. Thus, for any node classification method that is trained on individual embeddings from graph A, inputting the embeddings from graph B will be essentially random. This fact is also simply true by virtue of the fact that the M matrices of these graphs are completely disjoint. Of course, if we had a way to match "similar" nodes between the graphs, then it could be possible to use an alignment procedure to share information between the graphs, such as the procedure proposed by [12] for aligning the output of word embedding algorithms. Investigating such alignment procedures is an interesting direction for future work, though these approaches will inevitably be slow to run on new data, compared to approaches like GraphSAGE that can simply generate embeddings for new nodes without any additional training or alignment.

2. Suppose we train an embedding approach based on (4), along with a downstream classifier, on a graph C at time t; then at time t+1 we add more nodes to C and run a new round of SGD and update all embeddings. Two issues arise: first, by analogy to point 1 above, if the new nodes are only connected to a very small number of the old nodes, then the embedding space for the new nodes can essentially become rotated with respect to the original embedding space. Moreover, if we update all embeddings during training (not just those for the new nodes), as suggested by [28]'s streaming approach to DeepWalk, then the embedding space can arbitrarily rotate compared to the embedding space that we trained our classifier on, which only further exacerbates the problem.

Note that this rotational invariance is not problematic for tasks that only rely on pairwise node distances (e.g., link prediction via dot products). Moreover, some reasonable approaches to alleviate this issue of statistical drift are to (1) not update the already trained embeddings when optimizing the embeddings for new test nodes and (2) to only keep existing nodes as “context nodes” in the sampled random walks, i.e. to ensure that every dot-product in the skip-gram objective is the product of an already-trained node and a new/test node. We tried both of these approaches in this work and always selected the best performing DeepWalk variant.
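A quick numeric check of this invariance: rotating the embedding matrix by a random orthogonal matrix leaves every pairwise dot product (the Gram matrix) unchanged, so any objective of the form (5)-(6) cannot distinguish the two. Purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 16))                    # embeddings: |V| = 100, d = 16
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))    # random orthogonal matrix

# the Gram matrix of pairwise dot products is unchanged by the rotation
print(np.allclose(Z @ Z.T, (Z @ Q) @ (Z @ Q).T))  # True
```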

Also note that empirically DeepWalk performs better on the citation data than the Reddit data (Section 4.1 ) because this statistical drift is worse in the Reddit data, compared to the citation graph. In particular, the Reddit data has fewer edges from the test set to the train set, which help prevent mis-alignment: 96% of the 2005 citation links connect back to the 2000-2004 data, while only 73% of edges in the Reddit test set connect back to the train data.

Appendix E Proof of Theorem 1

To prove Theorem 1, we first prove three lemmas:

Lemma 1 states that there exists a continuous function that is guaranteed to only be positive in closed balls around a fixed number of points, with some noise tolerance.

Lemma 2 notes that we can approximate the function in Lemma 1 to an arbitrary precision using a multilayer perceptron with a single hidden layer.

Lemma 3 builds off the preceding two lemmas to prove that the pooling architecture can learn to map nodes to unique indicator vectors, assuming that all the input feature vectors are sufficiently distinct.

We also rely on the fact that the max-pooling operator (with at least one hidden layer) is capable of approximating any Hausdorff continuous, symmetric function to an arbitrary ε precision [29].

We note that all of the following are essentially identifiability arguments. We show that there exists a parameter setting for which Algorithm 1 can learn node clustering coefficients, which is non-obvious given that it operates by aggregating feature information. The efficient learnability of the functions described is the subject of future work. We also note that these proofs are conservative in the sense that clustering coefficients may in fact be identifiable in fewer iterations, or with fewer restrictions, than we impose. Moreover, due to our reliance on two universal approximation theorems [15, 29], the required dimensionality is in principle O(|𝒱|). We can provide a more informative bound on the required output dimension of some particular layers (e.g., Lemma 3); however, in the worst case this identifiability argument relies on having a dimension of O(|𝒱|). It is worth noting, however, that Kipf et al.'s "featureless" GCN approach has parameter dimension O(|𝒱|), so this requirement is not entirely unreasonable [17, 18].

Following Theorem 1, we let x_v ∈ U, ∀v ∈ 𝒱 denote the feature inputs for Algorithm 1 on graph 𝒢 = (𝒱, ℰ), where U is any compact subset of ℝ^d.

Lemma 1. Let C ∈ ℝ⁺ be a fixed positive constant. Then for any non-empty finite subset of nodes 𝒟 ⊆ 𝒱, there exists a continuous function g : U → ℝ such that

$$g(\mathbf{x}) > \epsilon \;\text{ if }\; \|\mathbf{x}-\mathbf{x}_{v}\|_{2}=0 \text{ for some } v\in\mathcal{D}, \qquad g(\mathbf{x}) \leq -\epsilon \;\text{ if }\; \|\mathbf{x}-\mathbf{x}_{v}\|_{2} \geq C \text{ for all } v\in\mathcal{D}, \tag{7}$$

where ε < 0.5 is a chosen error tolerance.

Proof. Many such functions exist. For concreteness, we provide one construction that satisfies these criteria. Let x ∈ U denote an arbitrary input to g, let d_v = ‖x − x_v‖₂, ∀v ∈ 𝒟, and let g be defined as g(x) = Σ_{v∈𝒟} g_v(x) with

$$g_{v}(\mathbf{x}) = \frac{3|\mathcal{D}|\epsilon}{b\,d_{v}^{2}+1} - 2\epsilon, \tag{8}$$

where b = (3|𝒟| − 1)/C² > 0. By construction:

  1. $g_v$ has a unique maximum of $3|\mathcal{D}|\epsilon - 2\epsilon > 2|\mathcal{D}|\epsilon$ at $d_v = 0$.
  2. $\lim_{d_v \rightarrow \infty}\left(\frac{3|\mathcal{D}|\epsilon}{bd_v^2+1} - 2\epsilon\right) = -2\epsilon$.
  3. $\frac{3|\mathcal{D}|\epsilon}{bd_v^2+1} - 2\epsilon \leq -\epsilon$ if $d_v \geq C$.

Note that $g$ is continuous (e.g., as a function of each $d_v \in \mathbb{R}^{+}$) since it is the sum of a finite set of continuous functions. Moreover, we have that, for a given input $\mathbf{x} \in U$, if $d_v \geq C$ for all points $v \in \mathcal{D}$, then $g(\mathbf{x}) = \sum_{v \in \mathcal{D}} g_v(\mathbf{x}) \leq -\epsilon$ by property 3 above. And, if $d_v = 0$ for any $v \in \mathcal{D}$, then $g$ is positive by construction, by properties 1 and 2, since in this case $g(\mathbf{x}) > 3|\mathcal{D}|\epsilon - 2\epsilon - 2(|\mathcal{D}|-1)\epsilon = |\mathcal{D}|\epsilon \geq \epsilon$; so we know that $g$ is positive whenever $d_v = 0$ for any node and negative whenever $d_v > C$ for all nodes. ∎
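As a quick numerical sanity check of this construction (an illustrative sketch; the points, $C$, and $\epsilon$ below are arbitrary choices), the following snippet implements Equation (8) and verifies the two sign properties just established:

```python
import numpy as np

def make_g(points, C, eps):
    """g(x) = sum_v g_v(x) with g_v(x) = 3|D|eps / (b*d_v^2 + 1) - 2*eps (Equation 8)."""
    D = len(points)
    b = (3 * D - 1) / C**2                                   # b = (3|D| - 1) / C^2 > 0
    def g(x):
        d = np.linalg.norm(x - points, axis=1)               # d_v = ||x - x_v||_2
        return float(np.sum(3 * D * eps / (b * d**2 + 1) - 2 * eps))
    return g

C, eps = 1.0, 0.1
points = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])      # the subset of nodes D
g = make_g(points, C, eps)

assert g(points[0]) > 0                                      # positive when d_v = 0 for some v in D
assert g(np.array([10.0, 10.0])) <= -eps                     # <= -eps when d_v >= C for all v in D
```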

Lemma 2, which states that we can approximate the function $g$ from Lemma 1 to an arbitrary precision using a multilayer perceptron with a single hidden layer, is a direct consequence of Theorem 2 in [15]. ∎

Lemma 3. Suppose there exists a fixed positive constant $C \in \mathbb{R}^{+}$ such that $\|\mathbf{x}_v - \mathbf{x}_{v'}\|_2 > C$ for all pairs of nodes. Then there exists a parameter setting for Algorithm 1, using a pooling aggregator at depth $k=1$, where this pooling aggregator has $\geq 2$ hidden layers with rectified non-linear units, such that $\mathbf{h}^1_v \in \mathcal{E}^{\chi(\mathcal{G}^4)}_{I}$ for all $v \in \mathcal{V}$, with no two nodes that co-occur in any 2-hop neighborhood assigned the same vector, where $\mathcal{E}^{\chi(\mathcal{G}^4)}_{I}$ is the set of one-hot indicator vectors of dimension $\chi(\mathcal{G}^4)$.

Proof. By the definition of the chromatic number, we know that we can label every node in $\mathcal{V}$ using $\chi(\mathcal{G}^4)$ unique colors, such that no two nodes that co-occur in any node's 2-hop neighborhood are assigned the same color. Thus, with exactly $\chi(\mathcal{G}^4)$ dimensions, we can assign a unique one-hot indicator vector to every node, where no two nodes that co-occur in any 2-hop neighborhood have the same vector. In other words, each color defines a subset of nodes $\mathcal{D} \subseteq \mathcal{V}$, and this subset of nodes can all be mapped to the same indicator vector without introducing conflicts.

By Lemmas 1 and 2 and the assumption that $\|\mathbf{x}_v - \mathbf{x}_{v'}\|_2 > C$ for all pairs of nodes, we can choose an $\epsilon < 0.5$ and there exists a single-layer MLP, $f_{\theta_\sigma}$, such that for any subset of nodes $\mathcal{D} \subseteq \mathcal{V}$:

$$f_{\theta_\sigma}(\mathbf{x}_{v'}) > 0 \ \ \forall v' \in \mathcal{D}, \qquad f_{\theta_\sigma}(\mathbf{x}_{v'}) < 0 \ \ \forall v' \in \mathcal{V} \setminus \mathcal{D}. \qquad (9)$$

By making this MLP one layer deeper and specifically using a rectified linear activation function, we can return a positive value only for nodes in the subset $\mathcal{D}$ and zero otherwise, and, since we normalize after applying the aggregator layer, this single positive value can be mapped to an indicator vector. Moreover, we can create $\chi(\mathcal{G}^4)$ such MLPs, where each MLP corresponds to a different color/subset; equivalently, each MLP corresponds to a different max-pooling dimension in Equation 3 of the main text. ∎

We now restate Theorem 1 and provide a proof.

Theorem 1. Let $\mathbf{x}_v \in U,\ \forall v \in \mathcal{V}$, be as above, and suppose that there exists a fixed positive constant $C \in \mathbb{R}^{+}$ such that $\|\mathbf{x}_v - \mathbf{x}_{v'}\|_2 > C$ for all pairs of nodes. Then for all $\epsilon > 0$ there exists a parameter setting for Algorithm 1 such that, after $K = 4$ iterations, $|z_v - c_v| < \epsilon,\ \forall v \in \mathcal{V}$, where $z_v \in \mathbb{R}$ are the final output values generated by Algorithm 1 and $c_v$ are node clustering coefficients, as defined in [38].

Proof. Without loss of generality, we describe how to compute the clustering coefficient for an arbitrary node $v$. For notational convenience, we use $\oplus$ to denote vector concatenation and $d_v$ to denote the degree of node $v$. This proof requires 4 iterations of Algorithm 1, where we use the pooling aggregator at all depths. For clarity, we ignore issues related to vector normalization, and we use the fact that the pooling aggregator can approximate any Hausdorff continuous function to an arbitrary $\epsilon$ precision [29]. Note that we can always account for normalization constants (line 7 in Algorithm 1) by having the aggregators prepend a unit value to all output representations; the normalization constant can then be recovered at later layers by taking the inverse of this prepended value. Note also that there almost certainly exist settings where the symmetric functions described below can be computed exactly by the pooling aggregator (or a variant of it), but the symmetric universal approximation theorem of [29], along with Lipschitz continuity arguments, suffices for the purposes of proving identifiability of clustering coefficients (up to an arbitrary precision). In particular, the functions described below, which we need to approximate in order to compute clustering coefficients, are all Lipschitz continuous on their domains (assuming we only run on nodes with positive degrees), so the errors introduced by approximation remain bounded by fixed constants (that can be made arbitrarily small).
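The bookkeeping trick for normalization constants described above can be illustrated in a few lines (a sketch; the vector below is arbitrary): prepending a unit value before normalizing lets a later layer undo the normalization by dividing by that entry.

```python
import numpy as np

h = np.array([2.0, 0.0, 4.0, 4.0])            # some unnormalized representation
h_aug = np.concatenate(([1.0], h))            # aggregator prepends a unit value
h_norm = h_aug / np.linalg.norm(h_aug)        # normalization (line 7 of Algorithm 1)
recovered = h_norm / h_norm[0]                # invert the stored 1/||h_aug|| entry
assert np.allclose(recovered[1:], h)          # the unnormalized representation is recovered
```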

By Lemma 3, we can assume that at depth $k=1$ all nodes in $v$'s 2-hop neighborhood have unique, one-hot indicator vectors, $\mathbf{h}^1_v \in \mathcal{E}_I$. Thus, at depth $k=2$ in Algorithm 1, suppose that we sum the unnormalized representations of the neighboring nodes. Then, without loss of generality, we will have that $\mathbf{h}^2_v = \mathbf{h}^1_v \oplus \mathbf{A}_v$, where $\mathbf{A}$ is the adjacency matrix of the subgraph containing all nodes connected to $v$ in $\mathcal{G}^4$ and $\mathbf{A}_v$ is the row of the adjacency matrix corresponding to $v$. Then, at depth $k=3$, again assume that we sum the neighboring representations (with the weight matrices as the identity); then we will have that

$$\mathbf{h}^3_v = \mathbf{h}^2_v \oplus \sum_{u \in \mathcal{N}(v)} \mathbf{h}^2_u = \mathbf{h}^1_v \oplus \mathbf{A}_v \oplus \Big(\sum_{u \in \mathcal{N}(v)} \mathbf{h}^1_u\Big) \oplus \Big(\sum_{u \in \mathcal{N}(v)} \mathbf{A}_u\Big). \qquad (10)$$

Letting $m$ denote the dimensionality of the $\mathbf{h}^1_v$ vectors (i.e., $m \equiv \chi(\mathcal{G}^4)$ from Lemma 3) and using square brackets to denote vector indexing, we can observe that:

  • $\mathbf{a} \equiv \mathbf{h}^3_v[0:m]$ is $v$'s one-hot indicator vector.
  • $\mathbf{b} \equiv \mathbf{h}^3_v[m:2m]$ is $v$'s row in the adjacency matrix, $\mathbf{A}$.
  • $\mathbf{c} \equiv \mathbf{h}^3_v[3m:4m]$ is the sum of the adjacency rows of $v$'s neighbors.

Thus, we have that $\mathbf{b}^\top\mathbf{c}$ is the number of edges in the subgraph containing only $v$ and its immediate neighbors, and $\sum_{i=0}^{m}\mathbf{b}[i] = d_v$. Finally, we can compute

$$c_v = \frac{\mathbf{b}^\top\mathbf{c}}{d_v(d_v - 1)} \qquad (11)$$
$$\phantom{c_v} = \frac{\mathbf{b}^\top\mathbf{c}}{\left(\sum_{i}\mathbf{b}[i]\right)^2 - \sum_{i}\mathbf{b}[i]}, \qquad (12)$$

and since this is a continuous function of $\mathbf{h}^3_v$, we can approximate it to an arbitrary $\epsilon$ precision with a single-layer MLP (or, equivalently, with one more iteration of Algorithm 1, ignoring neighborhood information). Again, this last step follows directly from [15]. ∎
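The construction above can be checked numerically on a toy graph. The sketch below is an illustration only (it uses exact one-hot vectors and exact sums rather than learned approximations, and the graph is made up): it builds $\mathbf{h}^3_v$ as in Equation (10), reads off $\mathbf{b}$ and $\mathbf{c}$, and recovers the clustering coefficient of $v$.

```python
import numpy as np

A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 1],
              [1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)       # adjacency matrix of a small undirected graph
m = A.shape[0]
H1 = np.eye(m)                                     # unique one-hot vectors (Lemma 3)

v = 0
neighbors = np.flatnonzero(A[v])
h2 = np.concatenate([H1[v], A[v]])                 # h_v^2 = h_v^1 concatenated with A_v
h3 = np.concatenate([h2, H1[neighbors].sum(0), A[neighbors].sum(0)])   # as in Equation (10)

b = h3[m:2 * m]                                    # v's adjacency row
c = h3[3 * m:4 * m]                                # sum of the adjacency rows of v's neighbors
d_v = b.sum()                                      # degree of v
cc = (b @ c) / (d_v * (d_v - 1))                   # clustering coefficient read out from b and c

links = A[np.ix_(neighbors, neighbors)].sum() / 2          # edges among v's neighbors
assert np.isclose(cc, 2 * links / (d_v * (d_v - 1)))       # matches the definition in [38]
print(cc)                                          # 1/3 for node 0 of this graph
```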

Corollary 2.

Suppose we sample node features from any probability distribution $\mu$ over $\mathbf{x} \in U$, where $\mu$ is absolutely continuous with respect to the Lebesgue measure. Then the conditions of Theorem 1 are almost surely satisfied with feature inputs $\mathbf{x}_v \sim \mu$.

Corollary 2 is a direct consequence of Theorem 1 and the fact that, for any probability distribution that is absolutely continuous w.r.t. the Lebesgue measure, the probability of sampling two identical points is zero. Empirically, we found that GraphSAGE-pool was in fact capable of maintaining modest performance by leveraging graph structure, even with completely random feature inputs (see Figure 3). However, the performance of GraphSAGE-GCN was not as robust, which makes intuitive sense given that Lemmas 1, 2, and 3 rely directly on the universal expressive capability of the pooling aggregator.


Finally, we note that Theorem 1 and Corollary 2 are expressed with respect to a particular given graph and are thus somewhat transductive. For the inductive setting, we can state:

Corollary 3. Suppose that, for some class of graphs, there exists a parameter setting for Algorithm 1 such that after $k$ iterations it maps every node to a unique indicator vector (as in Lemma 3). Then clustering coefficients on this class of graphs can be approximated to an arbitrary precision, as in Theorem 1, after $K = k + 4$ iterations of Algorithm 1.

Corollary 3 simply states that if after k 𝑘 k iterations of Algorithm 1, we can learn to uniquely identify nodes for a class of graphs, then we can also approximate clustering coefficients to an arbitrary precision for this class of graphs.









GitHub repository williamleif/GraphSAGE: representation learning on large graphs using stochastic graph convolutions.


GraphSage: Representation Learning on Large Graphs
Authors: William L. Hamilton ([email protected]), Rex Ying ([email protected])
Project website · Alternative reference PyTorch implementation

This directory contains code necessary to run the GraphSage algorithm. GraphSage can be viewed as a stochastic generalization of graph convolutions, and it is especially useful for massive, dynamic graphs that contain rich feature information. See our paper for details on the algorithm.

Note: GraphSage now also has better support for training on smaller, static graphs and graphs that don't have node features. The original algorithm and paper are focused on the task of inductive generalization (i.e., generating embeddings for nodes that were not present during training), but many benchmarks/tasks use simple static graphs that do not necessarily have features. To support this use case, GraphSage now includes optional "identity features" that can be used with or without other node attributes. Including identity features will increase the runtime, but also potentially increase performance (at the usual risk of overfitting). See the section on "Running the code" below.

Note: GraphSage is intended for use on large graphs (>100,000 nodes). The overhead of subsampling will start to outweigh its benefits on smaller graphs.

The example_data subdirectory contains a small example of the protein-protein interaction data, which includes 3 training graphs + one validation graph and one test graph. The full Reddit and PPI datasets (described in the paper) are available on the project website .

If you make use of this code or the GraphSage algorithm in your work, please cite the GraphSAGE paper (Hamilton, Ying, and Leskovec, NIPS 2017).

Requirements

Recent versions of TensorFlow, numpy, scipy, sklearn, and networkx are required (but networkx must be <=1.11). All of the required packages can be installed with pip.

To guarantee that you have the right package versions, you can use docker to easily set up a virtual environment. See the Docker subsection below for more info.

If you do not have Docker installed, you will need to install it first (the installation is pretty painless).

You can run GraphSage inside a Docker image. After cloning the project, build the image and run it with bash, or start a Jupyter Notebook instead of bash. You can also run the GPU image using nvidia-docker.

Running the code

The example_unsupervised.sh and example_supervised.sh files contain example usages of the code, which use the unsupervised and supervised variants of GraphSage, respectively.

If your benchmark/task does not require generalizing to unseen data, we recommend you try setting the "--identity_dim" flag to a value in the range [64,256]. This flag will make the model embed unique node ids as attributes, which will increase the runtime and number of parameters but also potentially increase the performance. Note that you should set this flag and not try to pass dense one-hot vectors as features (due to sparsity). The "dimension" of identity features specifies how many parameters there are per node in the sparse identity-feature lookup table.
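Conceptually, identity features amount to a trainable per-node embedding table whose rows are looked up by node id and used as (part of) the input features. The following is a rough numpy sketch of that idea (an illustration, not the repository's TensorFlow implementation; the names and sizes are made up):

```python
import numpy as np

num_nodes, identity_dim, attr_dim = 1000, 64, 50
rng = np.random.default_rng(0)
identity_table = rng.normal(scale=0.01, size=(num_nodes, identity_dim))  # trained jointly with the model
attributes = rng.normal(size=(num_nodes, attr_dim))                      # optional dense node attributes

def input_features(node_ids, use_attributes=True):
    ident = identity_table[node_ids]              # sparse lookup, not a dense one-hot vector
    if use_attributes:
        return np.concatenate([attributes[node_ids], ident], axis=1)
    return ident

print(input_features(np.array([0, 7, 42])).shape)  # (3, 114)
```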

Note that example_unsupervised.sh sets a very small max iteration number, which can be increased to improve performance. We generally found that performance continued to improve even after the loss was very near convergence (i.e., even when the loss was decreasing at a very slow rate).

Note: For the PPI data, and any other multi-output dataset that allows individual nodes to belong to multiple classes, it is necessary to set the --sigmoid flag during supervised training. By default the model assumes that the dataset is in the "one-hot" categorical setting.

Input format

As input, at minimum, the code requires a --train_prefix option, which specifies the prefix of the following data files:

  • <train_prefix>-G.json -- A networkx-specified json file describing the input graph. Nodes have 'val' and 'test' attributes specifying if they are a part of the validation and test sets, respectively.
  • <train_prefix>-id_map.json -- A json-stored dictionary mapping the graph node ids to consecutive integers.
  • <train_prefix>-class_map.json -- A json-stored dictionary mapping the graph node ids to classes.
  • <train_prefix>-feats.npy [optional] -- A numpy-stored array of node features; ordering given by id_map.json. Can be omitted, in which case only identity features will be used.
  • <train_prefix>-walks.txt [optional] -- A text file specifying random walk co-occurrences, one pair per line (only used by the unsupervised version of GraphSage).

To run the model on a new dataset, you need to make data files in the format described above. To run random walks for the unsupervised model and to generate the -walks.txt file, you can use the run_walks function in graphsage.utils.
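For reference, here is a rough sketch of producing toy files in this layout (the prefix toy-ppi, the labels, and the feature values are made up, and a modern networkx API is used here even though the repository itself expects networkx <= 1.11):

```python
import json
import numpy as np
import networkx as nx
from networkx.readwrite import json_graph

G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (2, 0), (2, 3)])
for n in G.nodes():
    G.nodes[n]["val"] = (n == 2)      # validation flag
    G.nodes[n]["test"] = (n == 3)     # test flag

prefix = "toy-ppi"
with open(prefix + "-G.json", "w") as f:
    json.dump(json_graph.node_link_data(G), f)                          # networkx-specified json graph
with open(prefix + "-id_map.json", "w") as f:
    json.dump({str(n): i for i, n in enumerate(G.nodes())}, f)          # node id -> consecutive integer
with open(prefix + "-class_map.json", "w") as f:
    json.dump({str(n): [int(n % 2)] for n in G.nodes()}, f)             # node id -> class labels
np.save(prefix + "-feats.npy", np.random.rand(G.number_of_nodes(), 8))  # optional node features
```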

Model variants

The user must also specify a --model, the variants of which are described in detail in the paper:

  • graphsage_mean -- GraphSage with mean-based aggregator
  • graphsage_seq -- GraphSage with LSTM-based aggregator
  • graphsage_maxpool -- GraphSage with max-pooling aggregator (as described in the NIPS 2017 paper)
  • graphsage_meanpool -- GraphSage with mean-pooling aggregator (a variant of the pooling aggregator where the element-wise mean replaces the element-wise max); a conceptual sketch of these aggregation steps follows this list.
  • gcn -- GraphSage with GCN-based aggregator
  • n2v -- an implementation of DeepWalk (called n2v for short in the code).
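The aggregation step that distinguishes these variants can be sketched conceptually in a few lines of numpy (an illustration only; the repository implements these as TensorFlow layers with trained weights, and the sizes below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
neigh_feats = rng.normal(size=(10, 32))                # features of 10 sampled neighbors

def mean_aggregate(h):
    return h.mean(axis=0)                              # graphsage_mean-style: average neighbor vectors

def maxpool_aggregate(h, W, bias):
    hidden = np.maximum(W @ h.T + bias[:, None], 0)    # per-neighbor MLP with ReLU
    return hidden.max(axis=1)                          # element-wise max over neighbors (graphsage_maxpool)
    # graphsage_meanpool would replace the max above with a mean

W, bias = rng.normal(size=(64, 32)), np.zeros(64)
print(mean_aggregate(neigh_feats).shape)               # (32,)
print(maxpool_aggregate(neigh_feats, W, bias).shape)   # (64,)
```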

Logging directory

Finally, a --base_log_dir should be specified (it defaults to the current directory). The output of the model and log files will be stored in a subdirectory of base_log_dir. The path to the logged data will be of the form <sup/unsup>-<data_prefix>/graphsage-<model_description>/. The supervised model will output F1 scores, while the unsupervised model will train embeddings and store them. The unsupervised embeddings will be stored in a numpy-formatted file named val.npy, with val.txt specifying the order of embeddings as a per-line list of node ids. Note that the full log outputs and stored embeddings can be 5-10GB in size (on the full data, when running the unsupervised variant).

Using the output of the unsupervised models

The unsupervised variants of GraphSage will output embeddings to the logging directory as described above. These embeddings can then be used in downstream machine learning applications. The eval_scripts directory contains examples of feeding the embeddings into simple logistic classifiers.
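A minimal sketch of that downstream step (the log-directory path below is a placeholder following the naming convention above, and the labels are stand-ins for a real task; the repository's eval_scripts show the actual workflow):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

log_dir = "unsup-example_data/graphsage-mean/"          # placeholder path of the form <sup/unsup>-<data_prefix>/graphsage-<model_description>/
embeddings = np.load(log_dir + "val.npy")               # one embedding per row
with open(log_dir + "val.txt") as f:
    node_ids = [line.strip() for line in f]             # row i of embeddings belongs to node_ids[i]

labels = {nid: hash(nid) % 2 for nid in node_ids}       # stand-in labels keyed by node id
y = np.array([labels[nid] for nid in node_ids])

clf = LogisticRegression(max_iter=1000).fit(embeddings, y)
print("training accuracy:", clf.score(embeddings, y))
```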

Acknowledgements

This code base was originally forked from https://github.com/tkipf/gcn/, and we owe many thanks to Thomas Kipf for making his code available. We also thank Yuanfang Li and Xin Li, who contributed to a course project that was based on this work. Please see the paper for funding details and additional (non-code-related) acknowledgements.
