Below is the order of the student presentations (they’re alphabetical by last name):
- Jitendra Bhandiwad (Wed, Mar 7)
- Gang Chen (Fri, Mar 2)
- Nishant Dholakia (Fri, Mar 9)
- Yuqi Fan (Wed, Mar 21)
- Anup Kumar (Wed, Mar 28)
- Viswanath Lolla (Fri, Mar 30)
- Divyansh Madan (Wed, Apr 4)
- Harinath Pathuri (Fri, Apr 6)
- Kushal Satrasala (Wed, Apr 11)
- Ming Shao (Fri, Apr 13)
- Abhijit Srinivas (Wed, Apr 18)
- Manikandan Sundaram (Fri, Apr 20)
- Jiun-Jie Wang (Wed, Apr 25)
- Zilong Ye (Fri, Apr 27)
I will keep this post sticky so that it is always the first post, and I will update the list above as presentation dates are decided. As I mentioned in class, typically I will announce on a Friday that you will present the following Wednesday.
In today’s lecture, we set things up for Jitendra’s presentation next Wednesday, where he will prove that the OMP algorithm computes the optimal solution when the input signal has a k-sparse representation in the columns of the dictionary.
Today’s material was from Anna Gilbert’s Lecture 5+6 notes.
As I mentioned at the end of the lecture, the proof that Jitendra will present uses properties of the matrix norm. Again, the Wikipedia page on the matrix norm has the relevant results (we’ll use what is called the “Induced Norm” on that page).
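For concreteness, here is a small sketch (my own illustration, not from the lecture notes) of the induced norms mentioned above: the induced p-norm of a matrix A is the maximum of ||Ax||_p over unit vectors x, and for p = 1, 2, and infinity it has the closed forms that NumPy computes directly.

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# Induced 1-norm: maximum absolute column sum.
norm1 = np.linalg.norm(A, 1)
# Induced infinity-norm: maximum absolute row sum.
norm_inf = np.linalg.norm(A, np.inf)
# Induced 2-norm (spectral norm): largest singular value of A.
norm2 = np.linalg.norm(A, 2)

print(norm1, norm_inf, norm2)  # 7.0, 9.0, sqrt(45) ~ 6.708
```

These are exactly the “induced” entries on the Wikipedia matrix norm page; the 2-norm case is the one that tends to show up in the OMP analysis.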
On Wednesday, we saw the Orthogonal Matching Pursuit (OMP) algorithm. We also looked at the least squares problem. The Wikipedia page on the least squares problem has the derivation I did in class (and more).
Rest of the material was from Anna Gilbert’s Lecture 5+6 notes.
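To tie the two topics of that lecture together, here is a minimal OMP sketch (my own code; the variable names are assumptions, not taken from the notes): greedily pick the dictionary column most correlated with the current residual, then re-fit the coefficients by least squares over the columns chosen so far.

```python
import numpy as np

def omp(Phi, x, k):
    """Run k greedy steps of OMP; return (support, coeffs)."""
    residual = x.copy()
    support = []
    for _ in range(k):
        # Greedy selection: column with the largest |inner product| with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the columns selected so far (the "orthogonal" step).
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], x, rcond=None)
        residual = x - Phi[:, support] @ coeffs
    return support, coeffs

# Usage: recover a 2-sparse combination of orthonormal columns.
rng = np.random.default_rng(0)
Phi, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # orthonormal dictionary
x = 3.0 * Phi[:, 2] - 1.5 * Phi[:, 5]
support, coeffs = omp(Phi, x, 2)
print(sorted(support))  # the true support [2, 5]
```

The least-squares step is where the derivation from class comes in: it projects x onto the span of the selected columns, which is what drives the residual to zero here.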
In the lecture on Friday, Feb 10, we saw the notion of an approximation algorithm and the notion of the coherence of a redundant dictionary. The material was from Lectures 4 and 5 of Anna Gilbert’s notes.
A gentle reminder that we will not have class next week (i.e., Feb 15 and 17). As “makeup” I encourage you to try to work out the problems in HW 1 from Anna Gilbert’s course. Please do not submit the HW; this is just an opportunity for you to review the material we have covered so far.
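As a small illustration of the coherence definition from that lecture (my own code, not from the notes): for a dictionary Phi with unit-norm columns, the coherence is the largest absolute inner product between two distinct columns.

```python
import numpy as np

def coherence(Phi):
    # Normalize the columns, form the Gram matrix, and take the largest
    # off-diagonal entry in absolute value.
    Phi = Phi / np.linalg.norm(Phi, axis=0)
    G = np.abs(Phi.T @ Phi)
    np.fill_diagonal(G, 0.0)
    return G.max()

# A redundant dictionary in R^2: the two standard basis vectors plus an
# all-ones column (normalized inside coherence()).
Phi = np.array([[1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])
mu = coherence(Phi)
print(mu)  # 1/sqrt(2) ~ 0.707
```

An orthonormal dictionary has coherence 0; redundancy forces some pair of columns to be correlated, which is why coherence shows up in the approximation guarantees.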
On Wednesday, we finished the reduction from X3C to the decision version of EXACT. The material was from Lecture 4 from Anna Gilbert’s notes.
In today’s lecture, we saw an algorithm to solve the ERROR problem for the case of orthogonal matrices. Then we saw an overview of NP-completeness. The material on ERROR is from Lecture 3 from Anna Gilbert’s notes. For a more detailed overview of NP-completeness (and the different examples that we saw in class), see Lecture 4 from Anna Gilbert’s notes.
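Here is a minimal sketch of the orthogonal case of ERROR (my own code, not from the notes; the function name is made up): because the dictionary is orthogonal, the coefficients of x are exactly Phi^T x, and the residual norm equals the norm of the coefficients we drop. So we keep the largest-magnitude coefficients, one at a time, until the residual error falls below eps.

```python
import numpy as np

def error_orthogonal(Phi, x, eps):
    """Fewest-term representation with residual l2 error at most eps."""
    c = Phi.T @ x                          # exact coefficients (orthogonality)
    order = np.argsort(np.abs(c))[::-1]    # largest |c_i| first
    c_out = np.zeros_like(c)
    for m, idx in enumerate(order, start=1):
        c_out[idx] = c[idx]
        # Orthogonality: the residual norm is the norm of the dropped coefficients.
        if np.linalg.norm(c - c_out) <= eps:
            return c_out, m
    return c_out, len(c)

Phi = np.eye(4)                            # simplest orthogonal dictionary
x = np.array([4.0, 0.3, -2.0, 0.4])
c_out, m = error_orthogonal(Phi, x, eps=1.0)
print(m)  # 2 terms suffice: keeping 4.0 and -2.0 leaves error 0.5
```

Keeping the largest coefficients first is optimal precisely because of the orthogonality identity in the comment; for redundant dictionaries this greedy rule no longer works, which is where the NP-completeness discussion picks up.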
We saw how to solve the EXACT and SPARSE problems for the case when the dictionary is orthogonal. The material is from Lecture 3 from Anna Gilbert’s notes.
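A small sketch of the SPARSE case for an orthogonal dictionary (my own code, not from the notes): the coefficients of x are exactly Phi^T x, so the best k-term solution simply keeps the k largest-magnitude coefficients and zeroes out the rest.

```python
import numpy as np

def best_k_term(Phi, x, k):
    c = Phi.T @ x                       # exact coefficients (orthogonality)
    keep = np.argsort(np.abs(c))[-k:]   # indices of the k largest |c_i|
    c_k = np.zeros_like(c)
    c_k[keep] = c[keep]
    return c_k

Phi = np.eye(4)                         # simplest orthogonal dictionary
x = np.array([0.1, -3.0, 0.5, 2.0])
c2 = best_k_term(Phi, x, 2)
print(c2)  # keeps -3.0 and 2.0, zeroes the rest
```

EXACT is then just the k for which the dropped coefficients are all zero; both problems are easy here, in contrast to the redundant case.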