Jialu's notes and blog
jialujialu.github.io
Scribe notes on algorithms and complexity<p>Here are some notes I scribed for courses I took during the first year of my Ph.D. study.
These notes have not been scrutinized the way peer-reviewed work would be, and I am responsible for any mistakes in them.</p>
<ul>
<li>
<p><a href="/assets/pdf_scribes/787GD.pdf">Continuous Optimization: Gradient Descent</a>.
This is a lecture from CS 787 Advanced Algorithms. We showed that gradient descent converges at a linear (geometric) rate on functions that are both strongly convex and smooth.</p>
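The convergence claim is easy to check numerically. Below is a minimal sketch, not taken from the notes: the quadratic objective, constants, and iteration count are my own illustrative choices.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x, which is mu-strongly convex and
# L-smooth, with mu and L the smallest/largest eigenvalues of A.
A = np.diag([1.0, 10.0])          # mu = 1, L = 10, condition number 10
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)    # exact minimizer, used to measure error

L_smooth = 10.0                   # step size 1/L gives linear convergence
x = np.zeros(2)
for _ in range(200):
    x = x - (1.0 / L_smooth) * (A @ x - b)   # gradient step

# Error contracts by a factor of at most (1 - mu/L) per iteration,
# so after 200 steps it is tiny.
print(np.linalg.norm(x - x_star))
```

With condition number \(L/\mu = 10\), the error shrinks by a factor of at least \(0.9\) per step, which is what "linear rate" means here.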
</li>
<li>
<p><a href="/assets/pdf_scribes/880LP_relaxation.pdf">LP relaxation and rounding</a>.
This is a lecture from <a href="http://pages.cs.wisc.edu/~shuchi/courses/880-F19/index.html">CS 880</a> Approximation and Online Algorithms. It covers the technique of approximating an integer linear program by relaxing it to a linear program, solving the relaxed problem, and rounding the fractional solution back to an integral one.</p>
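As an illustration of the relax-and-round recipe (not taken from the notes; the graph is my own toy example), here is a sketch of the classic 2-approximation for minimum vertex cover via its LP relaxation, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Toy graph for illustration: a 4-cycle plus the chord (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

# The vertex-cover ILP asks for x_v in {0, 1} with x_u + x_v >= 1 per edge;
# the relaxation allows 0 <= x_v <= 1.  linprog minimizes c^T x subject to
# A_ub x <= b_ub, so encode x_u + x_v >= 1 as -x_u - x_v <= -1.
A_ub = np.zeros((len(edges), n))
for k, (u, v) in enumerate(edges):
    A_ub[k, u] = A_ub[k, v] = -1.0
b_ub = -np.ones(len(edges))

res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)

# Deterministic rounding: every edge constraint forces max(x_u, x_v) >= 1/2,
# so taking all vertices with x_v >= 1/2 yields a feasible cover of size at
# most twice the LP optimum (hence at most twice the ILP optimum).
cover = [v for v in range(n) if res.x[v] >= 0.5 - 1e-9]
print(cover, res.fun)
```

The key observation is that the LP optimum lower-bounds the ILP optimum, so the rounded cover is within a factor of 2 of the best integral cover.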
</li>
<li><a href="/assets/pdf_scribes/710Lec5.pdf">Random walk and hitting time</a>.
In CS 710 Computational Complexity, besides conventional complexity material, Prof. <a href="http://pages.cs.wisc.edu/~jyc/">Jin-Yi Cai</a> also gave us a taste of some beautiful results on random walks. Here is one beautiful theorem that is not included in this scribe:
<blockquote>
<p><strong>Theorem</strong> (by <a href="https://research.utwente.nl/en/publications/random-walks-on-graphs">Gobel and Jagers</a>.)
In any undirected connected graph \( G = (V,E) \),
for any two nodes \( i, j \in V \) and any two edges \( \{u, v\}, \{u', v'\} \in E \),
\[\theta_{i,j,u,v} = \theta_{i,j,u',v'},\]
where \(\theta_{i,j,u,v}\) is the expected number of times that
a random \( (i,j) \) commute traverses the edge \( \{u,v\} \),
and an \( (i,j) \) commute is a sequence of vertices that can be divided into two phases:</p>
<ul>
<li>Phase 1: starting from vertex \(i\), ending with vertex \(j\), with no visits of \(j\) in between.</li>
<li>Phase 2: starting from vertex \(j\), ending with vertex \(i\), with no visits of \(i\) in between.</li>
</ul>
</blockquote>
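<p>The theorem is fun to verify empirically. Here is a minimal simulation sketch; the graph, the choice of \(i, j\), and the trial count are my own, not from the lecture:</p>

```python
import random

# Small connected test graph: a triangle 0-1-2 with a pendant vertex 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
i, j = 0, 3

def commute_traversals(i, j):
    """Run one random (i, j) commute; count traversals of each edge."""
    counts = {}
    v = i
    for target in (j, i):          # phase 1: walk i -> j; phase 2: j -> i
        while v != target:
            w = random.choice(adj[v])
            e = (min(v, w), max(v, w))   # undirected edge key
            counts[e] = counts.get(e, 0) + 1
            v = w
    return counts

random.seed(0)
trials = 100_000
total = {}
for _ in range(trials):
    for e, c in commute_traversals(i, j).items():
        total[e] = total.get(e, 0) + c

# By the theorem, the per-edge averages should all agree (up to noise).
for e in sorted(total):
    print(e, total[e] / trials)
```

Each walk stops at the first visit of its target, matching the "no visits in between" condition in both phases.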
</li>
<li>
<p><a href="/assets/pdf_scribes/710Lec9.pdf">Polynomial Hierarchy Theorem and Alternating Turing Machine</a>.</p>
</li>
<li>
<p><a href="/assets/pdf_scribes/710Lec12.pdf">Chee Yap Theorem, Mahaney’s theorem, and Tail bound</a>.</p>
</li>
<li><a href="/assets/pdf_scribes/710Lec17.pdf">Valiant-Vazirani Theorem and Efficient Amplification with Hashing mixing lemma</a>.
This note and the previous two touch on striking theorems that identify non-obvious conditions, each of which would imply
<ul>
<li>\(\Sigma_2^P = \Pi_2^P \): see <a href="/assets/pdf_scribes/710Lec9.pdf">Polynomial Hierarchy Theorem</a>;</li>
<li>\(\Sigma_3^P = \Pi_3^P \): see <a href="/assets/pdf_scribes/710Lec12.pdf">Chee Yap Theorem</a>;</li>
<li>\(NP = P \): see <a href="/assets/pdf_scribes/710Lec12.pdf">Mahaney’s theorem</a>;</li>
<li>\(NP = RP \) (randomized polynomial time): see <a href="/assets/pdf_scribes/710Lec17.pdf">Valiant-Vazirani Theorem</a>.</li>
</ul>
</li>
</ul>
<p>(For the \(\Sigma\) and \(\Pi\) notation, I found the Wikipedia page on the
<a href="https://en.wikipedia.org/wiki/Polynomial_hierarchy">polynomial hierarchy</a>
and <a href="http://www.cs.cornell.edu/courses/cs6810/2009sp/scribe/lecture5.pdf">this scribe</a> helpful.)</p>
Thu, 23 Jul 2020 08:05:00 +0000
jialujialu.github.io//notes/scribes/