Bachelor/Master Theses, Semester Projects, and DAS DS Capstone Projects

If you are interested in one of the following topics, please send an email to Prof. Bölcskei and include your complete transcripts.

These projects serve to illustrate the general nature of the projects we offer. You are most welcome to inquire directly with Prof. Bölcskei about tailored research projects. Likewise, please contact Prof. Bölcskei if you are interested in a bachelor thesis project.

Also, we have a list of finished theses on our website.

List of Bachelor Projects (BA)

List of Semester Projects (SP)

List of Master Projects (MA)



Learning in indefinite spaces (MA)

In classical learning theory, a symmetric, positive semidefinite and continuous kernel function is used to construct a reproducing kernel Hilbert space, which serves as a hypothesis space for learning algorithms [1].

However, in many applications, the kernel function fails to be positive semidefinite [2]; such indefinite kernels give rise to reproducing kernel Krein spaces [3]. The goal of this project is to develop a theory of learning for reproducing kernel Krein spaces.
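
To see concretely what failure of positive semidefiniteness means, the following small Python sketch (purely illustrative, not part of the project itself) builds the Gram matrix of the sigmoid kernel, a classical example of a kernel that is in general indefinite, and inspects its eigenvalues; the presence of negative eigenvalues shows that such a kernel cannot induce a reproducing kernel Hilbert space.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))                     # 50 sample points in R^3

def sigmoid_kernel(x, y, a=1.0, b=-1.0):
    # The sigmoid ("tanh") kernel is symmetric but in general not positive semidefinite.
    return np.tanh(a * np.dot(x, y) + b)

G = np.array([[sigmoid_kernel(x, y) for y in X] for x in X])   # Gram matrix
eigenvalues = np.linalg.eigvalsh(G)                  # real eigenvalues of the symmetric matrix G

# For these parameters the smallest eigenvalue is typically negative,
# i.e., the kernel is indefinite and leads out of the Hilbert space setting.
print("smallest eigenvalue:", eigenvalues.min())
print("largest eigenvalue: ", eigenvalues.max())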

Type of project: 100% theory
Prerequisites: Strong mathematical background, measure theory, functional analysis
Supervisor: Erwin Riegler
Professor: Helmut Bölcskei

References:
[1] F. Cucker and D. X. Zhou, "Learning Theory," ser. Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 2007.

[2] R. Luss and A. d’Aspremont, "Support vector machine classification with indefinite kernels," Mathematical Programming Computation, vol. 1, no. 2-3, pp. 97–118, Oct. 2009.

[3] A. Gheondea, "Reproducing kernel Krein spaces," Chapter 14 in D. Alpay, Operator Theory, Springer, 2015.



Neural collapse (SP/MA)

Recent experiments show that the last-layer features of prototypical neural networks trained by stochastic gradient descent on common classification datasets favor certain highly symmetric geometric patterns. Moreover, the networks continue to evolve towards these patterns even when training is continued after the training error has been driven to zero [1]. For example, the individual last-layer features collapse to their class means.

These patterns, observed in empirical network training, can help explain the generalization ability of deep networks. The phenomenon is called "neural collapse" and constitutes a type of inductive/implicit bias, which has been studied mathematically for linear networks trained by gradient descent on classification datasets [2, 3]. The goal of this project is to develop a general mathematical theory of the "neural collapse" phenomenon.
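
To make the collapse of last-layer features to their class means concrete, here is a small illustrative sketch; it uses synthetic Gaussian features as a stand-in for trained last-layer features, and the ratio it computes is a simplified variability measure inspired by, but not identical to, the metrics used in [1].

import numpy as np

def within_between_ratio(features, labels):
    # Trace of within-class scatter divided by trace of between-class scatter;
    # values close to zero indicate that features have collapsed to their class means.
    global_mean = features.mean(axis=0)
    sw, sb = 0.0, 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        mu_c = fc.mean(axis=0)
        sw += ((fc - mu_c) ** 2).sum()
        sb += len(fc) * ((mu_c - global_mean) ** 2).sum()
    return sw / sb

rng = np.random.default_rng(0)
labels = np.repeat(np.arange(10), 100)               # 10 classes, 100 samples each
class_means = rng.standard_normal((10, 64))          # hypothetical class means in feature space
for noise in [1.0, 0.1, 0.01]:                       # tighter clusters mimic progressing collapse
    features = class_means[labels] + noise * rng.standard_normal((1000, 64))
    print(f"noise = {noise:5.2f}   within/between ratio = {within_between_ratio(features, labels):.4f}")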

Type of project: 80% theory, 20% simulation
Prerequisites: Strong mathematical background and programming skills
Supervisor: Weigutian Ou
Professor: Helmut Bölcskei

References:
[1] V. Papyan, X. Y. Han, and D. L. Donoho, “Prevalence of neural collapse during the terminal phase of deep learning training,” Proceedings of the National Academy of Sciences, vol. 117, no. 40, pp. 24652–24663, 2020. [Link to Document]

[2] D. Soudry, E. Hoffer, M. S. Nacson, S. Gunasekar, and N. Srebro, “The implicit bias of gradient descent on separable data,” The Journal of Machine Learning Research, vol. 19, no. 1, pp. 2822–2878, 2018. [Link to Document]

[3] Z. Ji and M. J. Telgarsky, “Gradient descent aligns the layers of deep linear networks,” Proceedings of the 7th International Conference on Learning Representations, ICLR, 2019. [Link to Document]



Automatic synopsis generation from amendment proposals for German law (MA)

Changes to German law are proposed in the form of amendments, which contain natural language instructions on how to change individual words or sentences within the current law (see [1] for an example). For laypeople, it is difficult to infer from such proposals the text of the law after the amendment has been accepted, which reduces the ability of the general public to participate in the legislative process [2]. The goal of this project is to develop a machine learning algorithm that reads the current version of the law as well as the proposed amendment and then produces the new version of the law. This will make it possible to automatically generate a synopsis comparing the current and the proposed version (see [3] for an example).

Recently, significant advances in machine translation and question answering have been made using transformer networks pretrained on large data sets in an unsupervised fashion [4, 5, 6]. Machine learning solutions for the specific task at hand have, however, not been studied previously. Significant new contributions will hence be required. In particular, the semi-structured nature of amendments might make it necessary to incorporate a copy mechanism [7, 8, 9]. In this project, you will have the opportunity, first, to make novel contributions to the field of natural language processing and, second, to develop a working algorithm that can be deployed online and used by the general public.
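
Once the new version of the law has been produced, the synopsis itself can be generated mechanically, for instance by a word-level diff. The sketch below uses Python's difflib on invented placeholder sentences (not actual law text); the machine learning challenge of the project lies in producing the new version from the current law and the amendment in the first place.

import difflib

# Hypothetical placeholder sentences standing in for the current and amended law text.
old_version = "Die Urkunde ist in Papierform zu errichten und zu unterschreiben."
new_version = "Die Urkunde ist in elektronischer Form zu errichten und zu signieren."

# Word-level diff: '-' marks removed words, '+' marks inserted words.
for token in difflib.ndiff(old_version.split(), new_version.split()):
    if not token.startswith('?'):
        print(token)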

Type of project: 70% implementation/programming, 30% model development
Prerequisites: Experience with deep learning for NLP, knowledge of German
Supervisor: Clemens Hutter, Joseph Rumstadt
Professor: Helmut Bölcskei

References:
[1] "Gesetz zur Modernisierung des notariellen Berufsrechts und zur Änderung weiterer Vorschriften." [Link to Document]

[2] F. Herbert, "Verfassungsblog: On matters constitutional," 2021, doi: 10.17176/20210305-033813-0. [Link to Document]

[3] "Synopse: Gesetz zur Modernisierung des notariellen Berufsrechts und zur Änderung weiterer Vorschriften." [Link to Document]

[4] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, pp. 5999–6009, 2017. [Link to Document]

[5] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, "Improving language understanding by generative pre-training," Preprint, pp. 1–12, 2018. [Link to Document]

[6] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference, vol. 1, pp. 4171–4186, 2019. [Link to Document]

[7] J. Gu, Z. Lu, H. Li, and V. Li, "Incorporating copying mechanism in sequence-to-sequence learning," 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers, vol. 3, pp. 1631–1640, 2016, doi: 10.18653/v1/p16-1154. [Link to Document]

[8] A. See, P. Liu, and C. Manning, "Get to the point: Summarization with pointer-generator networks," ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), vol. 1, pp. 1073–1083, 2017, doi: 10.18653/v1/P17-1099. [Link to Document]

[9] B. McCann, N. Keskar, C. Xiong, and R. Socher, "The natural language decathlon: Multitask learning as question answering." [Link to Document]



Acoustic sensing and trajectory estimation of objects flying at supersonic speed (with industry) (SP)

In shooting sports, hunting, and law-enforcement applications, measuring the speed and trajectory of projectiles with high precision and reliability is an important technical challenge. For supersonic projectiles, these quantities are estimated from signals acquired by microphones placed at different locations. Recently, more powerful microprocessors have made it possible to employ more sophisticated algorithms.

The goal of this project is to investigate techniques such as linearization of nonlinear systems of equations, least-squares fitting, and neural-network-based machine learning. Existing hardware and algorithms provide an ideal starting point for the project, which will be carried out in collaboration with the industry partner SIUS (located in Effretikon, Zurich). SIUS offers close supervision as well as the possibility to work with real hardware and a test laboratory.
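
To give a flavor of the least-squares part, here is a deliberately simplified sketch: it assumes a stationary point source emitting a spherical wave, four hypothetical microphone positions, and a known speed of sound, whereas the actual project deals with the more involved geometry of the shock wave generated by a supersonic projectile.

import numpy as np
from scipy.optimize import least_squares

c = 343.0                                            # speed of sound in m/s
mics = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # microphone positions in m
true_source = np.array([0.3, 0.7])                   # unknown event location in m

rng = np.random.default_rng(0)
arrival_times = np.linalg.norm(mics - true_source, axis=1) / c
arrival_times += 1e-6 * rng.standard_normal(len(mics))              # measurement noise

def residuals(params):
    # Unknowns: source position (x, y) and emission time t0.
    position, t0 = params[:2], params[2]
    predicted = np.linalg.norm(mics - position, axis=1) / c + t0
    return predicted - arrival_times

solution = least_squares(residuals, x0=[0.5, 0.5, 0.0])
print("estimated source position:", solution.x[:2])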

About the industry partner: SIUS is the world’s leading manufacturer of electronic scoring systems in shooting sports. The company specializes in high-speed and high-precision measurement equipment capable of determining projectile position and trajectory, and has been equipping the most important international competitions, including the Olympic Games, for decades.

Type of project: 20% literature research, 20% theory, 50% implementation/programming, 10% experiments
Prerequisites: Solid mathematical background, knowledge of SciPy, Matlab, or a similar toolset, ideally knowledge of (deep) neural networks
Supervisor: Michael Lerjen, Steven Müllener
Professor: Helmut Bölcskei

References:
[1] SIUS Homepage [Link to Document]



Double descent curves in machine learning (MA)

Classical machine learning theory suggests that the generalization error follows a U-shaped curve as a function of the model complexity [1, Sec. 2.9]. When too few parameters are used to train the model, the generalization error is high due to underfitting. Too many parameters result in overfitting and hence again in a large generalization error. There is a sweet spot at the bottom of this U-shaped curve. During the past few years, it has been observed that increasing the model complexity beyond the so-called interpolation threshold leads to a generalization error that starts decreasing again [2]. The overall generalization error hence follows a so-called double descent curve. To date, there are only experimental results indicating the double descent behavior. These experiments employ vastly different complexity measures and learning algorithms.

The goal of this project is to first understand the experiments reported in the literature. Then, you will study the theory of metric entropy [3] and try to understand whether, and if so under which learning algorithms, a double descent curve appears when model complexity is measured in terms of metric entropy.
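
The following illustrative sketch (random ReLU features with minimum-norm least squares on synthetic data, one of several settings in which the phenomenon has been reported) typically produces a test error that peaks near the interpolation threshold p ≈ n and then decreases again as the number of features p grows.

import numpy as np

rng = np.random.default_rng(0)
n, d, n_test = 100, 50, 2000
w = rng.standard_normal(d) / np.sqrt(d)              # ground-truth linear model

def sample(m):
    X = rng.standard_normal((m, d))
    return X, X @ w + 0.1 * rng.standard_normal(m)

X_train, y_train = sample(n)
X_test, y_test = sample(n_test)

V = rng.standard_normal((d, 400))                    # fixed random projection directions
for p in [10, 50, 90, 100, 110, 200, 400]:           # p = number of random ReLU features
    features = lambda X: np.maximum(X @ V[:, :p], 0.0)
    theta, *_ = np.linalg.lstsq(features(X_train), y_train, rcond=None)   # minimum-norm solution
    test_mse = np.mean((features(X_test) @ theta - y_test) ** 2)
    print(f"p = {p:4d}   test MSE = {test_mse:.3f}")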

Type of project: 70% theory, 30% simulation
Prerequisites: Programming skills and knowledge in machine learning
Supervisor: Weigutian Ou
Professor: Helmut Bölcskei

References:
[1] J. Friedman, T. Hastie, and R. Tibshirani, "The elements of statistical learning," Springer Series in Statistics, vol. 1, Springer, New York, 2001.

[2] M. Belkin, D. Hsu, S. Ma, and S. Mandal, "Reconciling modern machine-learning practice and the classical bias–variance trade-off," Proceedings of the National Academy of Sciences, vol. 116, no. 32, pp. 15849–15854, 2019. [Link to Document]

[3] D. Elbrächter, D. Perekrestenko, P. Grohs, and H. Bölcskei, "Deep neural network approximation theory," IEEE Transactions on Information Theory, vol. 67, no. 5, pp. 2581–2623, May 2021. [Link to Document]



Deep ReLU network approximation rates (MA)

The compositional nature of deep neural networks allows for a systematic constructive approach to establishing good approximation rates for a wide range of classically considered function classes [1, 2, 3].

The goal of this project is to understand the techniques used in [2] and [3] and to subsequently employ them to characterize approximation rates for Daubechies wavelets. (If so inclined, you could also consider other interesting function classes.)
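
To get a first feeling for how compositionality buys approximation power, the following sketch reproduces the classical construction underlying [1, 2]: composing a ReLU hat function with itself yields sawtooth functions, and a telescoping sum of these approximates x^2 on [0, 1] with error decaying exponentially in the depth.

import numpy as np

def hat(x):
    # Hat function realized exactly by a one-layer ReLU network:
    # g(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1).
    relu = lambda t: np.maximum(t, 0.0)
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def approx_square(x, depth):
    # Telescoping construction: x^2 = x - sum_{k >= 1} g^(k)(x) / 4^k on [0, 1],
    # where g^(k) denotes the k-fold composition of the hat function.
    approx, g = x.copy(), x.copy()
    for k in range(1, depth + 1):
        g = hat(g)                                   # sawtooth with 2^(k-1) teeth
        approx = approx - g / 4.0 ** k
    return approx

x = np.linspace(0.0, 1.0, 1001)
for depth in [1, 3, 5, 8]:
    error = np.max(np.abs(approx_square(x, depth) - x ** 2))
    print(f"depth = {depth}   sup-norm error = {error:.2e}")   # decays like 4**(-(depth + 1))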

Type of project: 100% theory
Prerequisites: Strong mathematical background
Supervisor: Dennis Elbrächter
Professor: Helmut Bölcskei

References:
[1] D. Yarotsky, "Error bounds for approximations with deep ReLU networks," Neural Networks, vol. 94, pp. 103–114, 2017. [Link to Document]

[2] D. Elbrächter, D. Perekrestenko, P. Grohs, and H. Bölcskei, "Deep neural network approximation theory," IEEE Transactions on Information Theory, vol. 67, no. 5, pp. 2581–2623, May 2021. [Link to Document]

[3] I. Daubechies, R. A. DeVore, N. Dym, S. Faigenbaum-Golovin, S. Z. Kovalsky, K.-C. Lin, J. Park, G. Petrova, and B. Sober, "Neural network approximation of refinable functions," arXiv:2107.13191, 2021. [Link to Document]



On the metric entropy of dynamical systems (MA/SP)

The aim of this project is to explore the metric complexity of dynamical systems, i.e., to identify how much information about a system's input-output behavior is needed to describe the system dynamics to within a prescribed accuracy. In particular, you will study the asymptotics of the ε-entropy in the Kolmogorov sense [1, 2] of a certain class of causal linear systems [3]. Based on these results, you will try to develop a general theory that encompasses more general classes of dynamical systems, including time-varying systems [4] and nonlinear systems [5].
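
For concreteness, the central quantity is the Kolmogorov ε-entropy of a totally bounded set K in a metric space (X, d), defined via covering numbers as follows [1, 2]:

% N_eps(K): minimal number of closed eps-balls needed to cover K
N_\varepsilon(K) = \min\Bigl\{ N \in \mathbb{N} : \exists\, x_1, \dots, x_N \in X \text{ such that } K \subseteq \bigcup_{i=1}^{N} \{ x \in X : d(x, x_i) \le \varepsilon \} \Bigr\},
% eps-entropy: its binary logarithm
H_\varepsilon(K) = \log_2 N_\varepsilon(K).

The asymptotics of interest are those of H_ε(K) as ε → 0, with K a class of systems equipped with a suitable metric [3].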

Type of project: 100% theory
Prerequisites: Strong mathematical background
Supervisor: Thomas Allard
Professor: Helmut Bölcskei

References:
[1] A. N. Kolmogorov, "On certain asymptotic characteristics of completely bounded metric spaces," Doklady Akademii Nauk SSSR, vol. 108, no. 3, pp. 385–389, 1956.

[2] A. N. Kolmogorov and V. M. Tikhomirov, "ε-entropy and ε-capacity of sets in functional spaces," in Uspekhi Matematicheskikh Nauk, vol. 14, no. 2, pp. 3–86, 1959.

[3] G. Zames, "On the metric complexity of causal linear systems: ε-entropy and ε-dimension for continuous time," IEEE Transactions on Automatic Control, vol. 24, no. 2, pp. 222–230, 1979. [Link to Document]

[4] G. Matz, H. Bölcskei, and F. Hlawatsch, "Time-frequency foundations of communications," IEEE Signal Processing Magazine, vol. 30, no. 6, pp. 87–96, 2013. [Link to Document]

[5] M. Schetzen, "Nonlinear system modeling based on the Wiener theory," Proceedings of the IEEE, vol. 69, no. 12, pp. 1557–1573, 1981. [Link to Document]



Concentration of measure phenomena in machine learning (MA)

Things tend to get weird in high dimensions [1, 2]. We would like to understand why and how.

One example of this weirdness is the observation that, with increasing dimension, Lipschitz continuous functions become almost constant on inputs sampled according to so-called isoperimetric measures (e.g., the Gaussian distribution). In [3], the problem of fitting noisy data, consisting of n data points in d-dimensional space sampled according to an isoperimetric measure, to an error below the noise level is considered. It is shown that the number of parameters of solutions that are stably parametrized and (Lipschitz-)robust must scale at least like n·d.
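
The following small simulation (illustrative only) shows this effect for the Euclidean norm, a 1-Lipschitz function, evaluated on standard Gaussian vectors: the mean grows like the square root of the dimension while the standard deviation stays of order one, so relative to its typical value the function is nearly constant in high dimensions.

import numpy as np

rng = np.random.default_rng(0)
lipschitz_f = lambda x: np.linalg.norm(x, axis=-1)   # the Euclidean norm is 1-Lipschitz

for d in [10, 100, 1000, 10000]:
    samples = rng.standard_normal((2000, d))         # 2000 standard Gaussian vectors in R^d
    values = lipschitz_f(samples)
    # Gaussian concentration: the standard deviation stays O(1), independently of d.
    print(f"d = {d:6d}   mean = {values.mean():9.2f}   std = {values.std():.3f}")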

The goal of this project is to understand the arguments in [3] and to subsequently establish either a variation of these results or, ideally, a novel result based on a concentration of measure phenomenon [1].

Type of project: 100% theory
Prerequisites: Strong mathematical background
Supervisor: Dennis Elbrächter
Professor: Helmut Bölcskei

References:
[1] A. S. Bandeira, A. Singer, and T. Strohmer, “Mathematics of data science (draft).” [Link to Document]

[2] R. Vershynin, “High-dimensional probability: An introduction with applications in data science" (Cambridge series in statistical and probabilistic mathematics), Cambridge University Press, 2018. [Link to Document]

[3] S. Bubeck and M. Sellke, “A universal law of robustness via isoperimetry,” arXiv:2105.12806, 2021. [Link to Document]



Learning with general scattering networks (BA/SP)

Deep learning methods have dramatically improved state-of-the-art results on image classification, speech recognition, and other perceptual tasks in recent years.

However, common architectures typically require a large number of trainable parameters and therefore a lot of training data and computing power.

The goal of this project is to design and implement general scattering networks [1] with far fewer trainable parameters than conventional networks (such as ResNet or Inception) and to assess their performance relative to fully trained networks on standard data sets (e.g., MNIST, ImageNet).
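
As a very first starting point, the sketch below implements a strongly simplified, purely illustrative one-layer scattering-style feature extractor (fixed Gabor filters, modulus nonlinearity, average pooling; not the architecture of [1]). All filters are fixed, so only a linear classifier trained on top of such features would contain trainable parameters.

import numpy as np
from scipy.signal import convolve2d

def gabor(theta, size=7, sigma=2.0, freq=0.5):
    # Fixed (non-trainable) Gabor filter at orientation theta.
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    u = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * u)

filters = [gabor(theta) for theta in np.linspace(0.0, np.pi, 4, endpoint=False)]

def scattering_features(image):
    # One layer: fixed filters -> modulus nonlinearity -> 4x4 average pooling (28x28 input).
    features = []
    for h in filters:
        response = np.abs(convolve2d(image, h, mode='same'))
        features.append(response.reshape(7, 4, 7, 4).mean(axis=(1, 3)).ravel())
    return np.concatenate(features)

image = np.random.default_rng(0).random((28, 28))    # stands in for an MNIST digit
print(scattering_features(image).shape)              # (196,) = 4 filters * 7*7 pooled values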

Type of project: 75% simulation, 25% theory
Prerequisites: Affinity for signal processing or analysis, programming skills
Supervisor: Ines Haymann
Professor: Helmut Bölcskei

References:
[1] T. Wiatowski and H. Bölcskei, "A mathematical theory of deep convolutional neural networks for feature extraction," IEEE Transactions on Information Theory, vol. 64, no. 3, pp. 1845–1866, Mar. 2018. [Link to Document]

[2] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning", Nature, vol. 521, pp. 436–444, May 2015. [Link to Document]