| 21. Graph partitioning | blank notes | annotated notes |
| spectral_partitioning.ipynb | notebook file | HTML version |
Are there various forms of the spectral partitioning algorithm? If so, what are they, how do they apply to problems, and what makes them different from the one we are discussing in class?
There are various ways in which spectral partitioning can be used. One can use several eigenvectors (instead of just one), either to split a graph into more than two pieces or to improve the results of splitting into two pieces. It is also possible to apply spectral partitioning in situations where edges or vertices have associated weights. There are other variations too.
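For instance, the weighted variant replaces the ordinary Laplacian with the weighted Laplacian \(L = D - W\), where \(W\) is the matrix of edge weights. Here is a minimal sketch of that variant (the graph below is made up for illustration, and this is only one way to implement it, not the exact algorithm from class):

```python
import numpy as np
import networkx as nx

# A made-up weighted graph: two tightly connected triangles
# joined by a single weak "bridge" edge.
G = nx.Graph()
G.add_weighted_edges_from([
    (0, 1, 5.0), (1, 2, 5.0), (0, 2, 5.0),   # first triangle
    (3, 4, 5.0), (4, 5, 5.0), (3, 5, 5.0),   # second triangle
    (2, 3, 0.1),                              # weak bridge between them
])

# Weighted Laplacian L = D - W.
L = nx.laplacian_matrix(G, weight="weight").toarray().astype(float)

# L is symmetric, so eigh returns real eigenvalues and orthonormal
# eigenvectors; the second eigenvector is the Fiedler vector.
eigenvalues, eigenvectors = np.linalg.eigh(L)
fiedler = eigenvectors[:, 1]

# Split the vertices by the sign of the Fiedler vector entries.
part_a = [v for v, x in zip(G.nodes, fiedler) if x >= 0]
part_b = [v for v, x in zip(G.nodes, fiedler) if x < 0]
print(part_a, part_b)   # expect the two triangles to separate
```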
- Why do we use a Laplacian matrix to look at networks, and how is it different from just a list of which points are connected?
- How does drawing a network help us figure out how to break it into parts?
Are all square matrices symmetric?
No. For example, \(\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\) is square but not symmetric.
Why is orthonormality important in conditions (1’) and (3)?
Orthonormality is not needed to state these conditions, but the fact that the Laplacian matrix is symmetric, and hence orthonormally diagonalizable, helps in finding a solution to the relaxed partitioning problem.
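A small numeric illustration of this point (the path graph below is my own example, not one from the notes): the orthonormal eigenbasis of a symmetric Laplacian realizes the solution of the relaxed problem, since the minimum of \(x^T L x\) over unit vectors orthogonal to the all-ones vector is \(\lambda_2\), attained at the second eigenvector.

```python
import numpy as np

# Laplacian of the path graph on 4 vertices (my own small example).
L = np.array([
    [ 1, -1,  0,  0],
    [-1,  2, -1,  0],
    [ 0, -1,  2, -1],
    [ 0,  0, -1,  1],
], dtype=float)

# Because L is symmetric, eigh gives an orthonormal eigenbasis.
w, V = np.linalg.eigh(L)
assert np.allclose(V.T @ V, np.eye(4))   # columns are orthonormal

# The relaxed problem minimizes x^T L x over unit vectors x that are
# orthogonal to the all-ones vector; the minimum value is lambda_2,
# attained at the second eigenvector.
u2 = V[:, 1]
print(np.isclose(u2 @ L @ u2, w[1]))     # Rayleigh quotient equals lambda_2
print(np.isclose(u2 @ np.ones(4), 0.0))  # orthogonal to the constant vector
```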
How do we determine the optimal number of partitions for a given graph?
The spectral partitioning method, as I explained it, splits a graph into two parts. It is possible to modify it to obtain a splitting into a larger number of pieces. One such modification uses several eigenvectors of the Laplacian together with the k-means clustering algorithm. In that case, one can use techniques related to k-means, such as the elbow method, to try to estimate the optimal number of partitions.
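A sketch of this modification (the graph and the range of \(k\) below are made-up illustration choices): embed each vertex using the first \(k\) eigenvectors of the Laplacian, run k-means on the embedded points, and record the k-means inertia for the elbow method.

```python
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

# An example graph: two complete graphs joined by a short path.
G = nx.barbell_graph(5, 1)
L = nx.laplacian_matrix(G).toarray().astype(float)
_, V = np.linalg.eigh(L)

inertias = []
for k in range(2, 7):
    X = V[:, :k]    # each row embeds a vertex via k eigenvectors
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias.append(km.inertia_)

# For the elbow method, look for the k after which the inertia stops
# dropping sharply (plotting inertia against k makes this visible).
print(list(zip(range(2, 7), inertias)))
```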
What are the benefits of spectral partitioning?
Partitioning by itself provides useful information about a graph: which groups of vertices are more tightly connected, and how difficult it is to split the graph into pieces. Since computing the exact optimal partitioning of a graph is difficult, spectral partitioning is one of the methods used to obtain a computationally efficient approximation of the optimal partitioning.
Is there any way to tell quickly just by looking at a matrix that it has \(n\) linearly independent eigenvectors?
For matrices of some special forms (e.g., symmetric matrices) it is possible; in general, no.
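Symmetry, for example, is immediate to check numerically, and it already guarantees a full orthonormal eigenbasis (a quick check with a made-up matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Symmetry is easy to test; a symmetric n x n matrix is guaranteed to
# have n linearly independent (indeed orthonormal) eigenvectors.
print(np.allclose(A, A.T))        # True: A is symmetric
w, V = np.linalg.eigh(A)
print(np.linalg.matrix_rank(V))   # 3: the eigenvectors span R^3
```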
- What happens when the value of \(\lambda_2\) is zero?
- How do you calculate vector \(u_2\) (from the example we did) if we did not have the code for it?
How can minimizing the Cheeger constant lead to balanced partitions in a graph? Is it because of minimizing the number of cut edges in graph partitioning?
The Cheeger constant is a number that can be computed for a graph and that expresses how tightly connected the graph is. A bigger Cheeger constant means that we have to cut more edges (relative to the size of the parts) in order to partition the graph. The Cheeger constant has a fixed value for each graph, so it cannot be minimized.
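To illustrate the definition, here is a brute-force sketch computing the edge version of the Cheeger constant, \(h(G) = \min_{0 < |S| \le |V|/2} |\partial S| / |S|\). Note that the class notes may use a slightly different normalization, and the exponential search over subsets is only feasible for tiny example graphs.

```python
import itertools
import networkx as nx

def cheeger_constant(G):
    """Brute-force the (edge) Cheeger constant:
    h(G) = min over nonempty S with |S| <= |V|/2 of |boundary(S)| / |S|.
    Exponential in |V|, so only for tiny example graphs."""
    nodes = list(G.nodes)
    n = len(nodes)
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in itertools.combinations(nodes, size):
            S = set(subset)
            # Count edges with exactly one endpoint in S.
            boundary = sum(1 for u, v in G.edges if (u in S) != (v in S))
            best = min(best, boundary / len(S))
    return best

# A tightly connected graph has a larger constant than a barely connected one.
print(cheeger_constant(nx.complete_graph(6)))   # 3.0: hard to disconnect
print(cheeger_constant(nx.path_graph(6)))       # ~0.33: easy to cut in half
```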