Bachelor/Master Theses, Semester Projects, and DAS DS Capstone Projects

If you are interested in one of the following topics, please send an email to Prof. Bölcskei and include your complete transcripts. Please note that we cannot respond to requests that do not contain your transcripts.

These projects illustrate the general nature of the projects we offer. You are most welcome to inquire directly with Prof. Bölcskei about tailored research projects. Likewise, please contact Prof. Bölcskei if you are interested in a bachelor thesis project.

Also, we have a list of finished theses on our website.

List of Bachelor Projects (BA)

List of Semester Projects (SP)

List of Master Projects (MA)

Learning in indefinite spaces (MA)

In classical learning theory, a symmetric, positive semidefinite and continuous kernel function is used to construct a reproducing kernel Hilbert space, which serves as a hypothesis space for learning algorithms [1].

However, in many applications the kernel function fails to be positive semidefinite [2], which, in turn, leads to so-called (indefinite) Krein spaces [3]. The goal of this project is to develop a theory of learning for reproducing kernel Krein spaces.
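
For concreteness, the following display recalls the positive-semidefiniteness condition and the indefinite inner product arising in the Krein-space setting; the decomposition k = k₊ − k₋ exists under suitable conditions [3], and the notation is otherwise standard.

```latex
% A symmetric kernel k on X is positive semidefinite if
\sum_{i=1}^{n}\sum_{j=1}^{n} c_i c_j\, k(x_i, x_j) \;\ge\; 0
\quad \text{for all } n \in \mathbb{N},\ x_1,\dots,x_n \in X,\ c \in \mathbb{R}^n.
% An indefinite kernel may still admit a decomposition (cf. [3])
k = k_+ - k_-, \qquad k_\pm \ \text{positive semidefinite},
% and the associated reproducing kernel Krein space carries the
% indefinite inner product
\langle f, g \rangle_{\mathcal{K}}
= \langle f_+, g_+ \rangle_{\mathcal{H}_+} - \langle f_-, g_- \rangle_{\mathcal{H}_-}.
```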

Type of project: 100% theory
Prerequisites: Strong mathematical background, measure theory, functional analysis
Supervisor: Erwin Riegler
Professor: Helmut Bölcskei

[1] F. Cucker and D. X. Zhou, "Learning theory," Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 2007.

[2] R. Luss and A. d’Aspremont, "Support vector machine classification with indefinite kernels," Mathematical Programming Computation, vol. 1, no. 2-3, pp. 97–118, Oct. 2009.

[3] A. Gheondea, "Reproducing kernel Krein spaces," Chapter 14 in D. Alpay, Operator Theory, Springer, 2015.

Estimating covering numbers for RKHS (MA)

Covering numbers are a powerful mathematical tool for many problems in data science. For example, they yield upper bounds on the worst-case error in quantization theory [1] and on the sample error in learning theory [2, 3]. In many applications in learning theory, the hypothesis space is a ball in a reproducing kernel Hilbert space (RKHS) [4]. The goal of this project is to establish results on the covering numbers of balls in an RKHS in terms of the smoothness of the underlying kernel function.
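
For reference, the covering number of a set S in a metric space (X, d) is defined as follows; the displayed shapes of the bounds are merely the typical forms such estimates take, not results established here.

```latex
% Covering number of a totally bounded set S in a metric space (X, d):
N(\varepsilon, S, d) = \min\Big\{ n \in \mathbb{N} :
\exists\, x_1,\dots,x_n \in X \ \text{with}\ S \subseteq \bigcup_{i=1}^{n} B(x_i, \varepsilon) \Big\}.
% For the unit ball B of an RKHS, one typically seeks estimates such as
\log N(\varepsilon, B, \|\cdot\|_\infty) \lesssim \varepsilon^{-\alpha}
\quad \text{or} \quad
\log N(\varepsilon, B, \|\cdot\|_\infty) \lesssim \big(\log(1/\varepsilon)\big)^{\beta},
% with exponents \alpha, \beta reflecting the smoothness of the kernel.
```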

Type of project: 100% theory
Prerequisites: Strong mathematical background, measure theory, functional analysis
Supervisor: Erwin Riegler
Professor: Helmut Bölcskei

[1] S. Graf and H. Luschgy, "Foundations of quantization for probability distributions," Lecture Notes in Mathematics, Springer, 2000.

[2] F. Cucker and D. X. Zhou, "Learning theory," Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 2007.

[3] M. J. Wainwright, "High-dimensional statistics: A non-asymptotic viewpoint," Cambridge University Press, 2019.

[4] A. Berlinet and C. Thomas-Agnan, "Reproducing kernel Hilbert spaces in probability and statistics," Springer, 2004.

Automatic synopsis generation from amendment proposals for German law (MA)

Changes to German law are proposed in the form of amendments, which contain natural language instructions on how to change individual words or sentences within the current law (see, e.g., [1]). For laypeople it is difficult to infer from such proposals what the text of the law will be once the amendment is accepted, which limits the ability of the general public to participate in the legislative process [2]. The goal of this project is to develop a machine learning algorithm that reads the current version of the law as well as the proposed amendment and produces the resulting new version of the law. This will make it possible to automatically generate a synopsis comparing the previous and proposed versions (see [3] for an example).

Recently, significant advances in machine translation and question answering have been made using transformer networks pretrained on large unlabeled data sets [4, 5, 6]. Machine learning solutions for the specific task at hand have, however, not been studied previously, so significant new contributions will be required. In particular, the semi-structured nature of amendments might make it necessary to incorporate a copy mechanism [7, 8, 9]. In this project, you will have the opportunity, first, to make novel contributions to the field of natural language processing and, second, to develop a working algorithm that can be deployed online and used by the general public.
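
To make the intended input/output format concrete, here is a minimal sketch that frames amendment application as sequence-to-sequence generation and renders the synopsis with Python's difflib. The checkpoint t5-small and the prompt layout are stand-ins only; a model suited to German legal text, task-specific fine-tuning, and possibly a copy mechanism [7, 8, 9] would be needed in practice.

```python
# Sketch only: seq2seq amendment application plus diff-based synopsis.
# "t5-small" and the prompt layout are placeholders, not a proposed solution.
import difflib
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL = "t5-small"  # stand-in; a German legal-domain model would be required
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

def apply_amendment(current_law: str, amendment: str) -> str:
    """Generate the amended version of the law (quality is model-dependent)."""
    prompt = f"Gesetz: {current_law}\nÄnderung: {amendment}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=512)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def synopsis(old: str, new: str) -> str:
    """Line-by-line synopsis comparing previous and proposed versions."""
    return "\n".join(difflib.unified_diff(
        old.splitlines(), new.splitlines(),
        fromfile="current law", tofile="amended law", lineterm=""))
```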

Type of project: 70% implementation/programming, 30% model development
Prerequisites: Experience with deep learning for natural language processing (NLP), knowledge of German
Supervisor: Clemens Hutter, Joseph Rumstadt
Professor: Helmut Bölcskei

[1] "Gesetz zur Modernisierung des notariellen Berufsrechts und zur Änderung weiterer Vorschriften." [Link to Document]

[2] F. Herbert, "Verfassungsblog: On matters constitutional," 2021, doi: 10.17176/20210305-033813-0. [Link to Document]

[3] "Synopse: Gesetz zur Modernisierung des notariellen Berufsrechts und zur Änderung weiterer Vorschriften." [Link to Document]

[4] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, pp. 5999–6009, 2017. [Link to Document]

[5] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, "Improving language understanding by generative pre-training," Preprint, pp. 1–12, 2018. [Link to Document]

[6] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), vol. 1, pp. 4171–4186, 2019. [Link to Document]

[7] J. Gu, Z. Lu, H. Li, and V. Li, "Incorporating copying mechanism in sequence-to-sequence learning," Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Long Papers, vol. 3, pp. 1631–1640, 2016, doi: 10.18653/v1/p16-1154. [Link to Document]

[8] A. See, P. Liu, and C. Manning, "Get to the point: Summarization with pointer-generator networks," Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Long Papers, vol. 1, pp. 1073–1083, 2017, doi: 10.18653/v1/P17-1099. [Link to Document]

[9] B. McCann, N. Keskar, C. Xiong, and R. Socher, "The natural language decathlon: Multitask learning as question answering." [Link to Document]

Acoustic sensing and trajectory estimation of objects flying at supersonic speed (with industry) (SP)

In shooting sports, hunting, and law-enforcement applications, measuring the speed and trajectory of projectiles with high precision and reliability constitutes an important technical challenge. For supersonic projectiles, these quantities are estimated from signals acquired by microphones placed at different locations. Recently, more powerful microprocessors have made it possible to employ more sophisticated algorithms.

The goal of this project is to investigate new techniques for the task at hand, such as linearization of nonlinear systems of equations, least-squares fitting, and neural-network-based machine learning. Existing hardware and algorithms provide a starting point for the project, which will be carried out in collaboration with our industry partner SIUS [1], located in Effretikon, Zurich. SIUS offers close supervision and the possibility to use its hardware and test laboratory.
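
To give a flavor of the least-squares techniques mentioned above, the following toy sketch recovers a static source position from microphone arrival times under a simple point-source model using SciPy. The actual project involves a moving supersonic projectile and its shock-wave (Mach cone) geometry, so this is an illustration only; all numbers are made up.

```python
# Toy sketch: source localization from arrival times via nonlinear least
# squares (scipy.optimize.least_squares). A static point source is assumed;
# supersonic projectiles generate a Mach cone and need a richer model.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air [m/s], assumed constant

mics = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # [m]

# synthetic measurements: true source at (0.3, 0.7), emission at t0 = 10 ms
rng = np.random.default_rng(0)
true_src, t0_true = np.array([0.3, 0.7]), 0.01
t_meas = t0_true + np.linalg.norm(mics - true_src, axis=1) / C
t_meas = t_meas + 1e-6 * rng.standard_normal(t_meas.shape)  # timing jitter

def residuals(params):
    x, y, t0 = params  # source position [m] and emission time [s]
    dist = np.linalg.norm(mics - np.array([x, y]), axis=1)
    return (t0 + dist / C) - t_meas

sol = least_squares(residuals, x0=[0.5, 0.5, 0.0])  # initial guess: array center
x_hat, y_hat, t0_hat = sol.x
```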

About the industry partner: SIUS is the world’s leading manufacturer of electronic scoring systems in shooting sports. The company is specialized in producing high speed and high precision measurement equipment capable of measuring projectile position and trajectory and has been equipping the most important international competitions including the Olympic Games for decades.

Type of project: 20% literature research, 20% theory, 50% implementation/programming, 10% experiments
Prerequisites: Solid mathematical background, knowledge of SciPy, MATLAB, or a similar toolset; ideally knowledge of (deep) neural networks
Supervisor: Michael Lerjen, Steven Müllener
Professor: Helmut Bölcskei

[1] SIUS Homepage [Link to Document]

Deep ReLU network approximation rates (MA)

The compositional nature of deep neural networks allows for a systematic constructive approach to establishing good approximation rates for a wide range of classically considered function classes [1, 2, 3].

The goal of this project is to understand the techniques used in [2] and [3] and to subsequently employ them to characterize approximation rates for wavelet systems generated by a Daubechies wavelet [3, Sec. 5].
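
A central building block behind the rates in [1, 2] is the approximation of squaring by deep ReLU networks through composed "sawtooth" functions; the following display recalls this construction.

```latex
% Hat function, realized exactly by a single ReLU layer
% (\rho(x) = \max\{x, 0\}):
g(x) = 2\rho(x) - 4\rho\big(x - \tfrac{1}{2}\big) + 2\rho(x - 1).
% The s-fold composition g_s = g \circ \cdots \circ g is a sawtooth with
% 2^{s-1} teeth, and on [0, 1] (cf. [1]):
\Big| x^2 - \Big( x - \sum_{s=1}^{m} \frac{g_s(x)}{2^{2s}} \Big) \Big| \le 2^{-2m-2},
% so squaring (and, by polarization, multiplication) is approximated to
% accuracy \varepsilon by networks of depth O(\log(1/\varepsilon)).
```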

Type of project: 100% theory
Prerequisites: Strong mathematical background
Supervisor: Dennis Elbrächter
Professor: Helmut Bölcskei

[1] D. Yarotsky, "Error bounds for approximations with deep ReLU networks," Neural Networks, vol. 94, pp. 103–114, 2017. [Link to Document]

[2] D. Elbrächter, D. Perekrestenko, P. Grohs, and H. Bölcskei, "Deep neural network approximation theory," IEEE Transactions on Information Theory, vol. 67, no. 5, pp. 2581–2623, May 2021. [Link to Document]

[3] I. Daubechies, R. A. DeVore, N. Dym, S. Faigenbaum-Golovin, S. Z. Kovalsky, K.-C. Lin, J. Park, G. Petrova, and B. Sober, "Neural network approximation of refinable functions," IEEE Transactions on Information Theory, 2022. [Link to Document]

On the metric entropy of dynamical systems (MA/SP)

The aim of this project is to explore the metric entropy of dynamical systems, i.e., to identify how much information about a system's input-output behavior is needed to describe the system dynamics to within a prescribed accuracy. In particular, you will study the asymptotics of the ε-entropy in the Kolmogorov sense [1, 2] of a certain class of causal linear systems [3]. Building on these results, you will attempt to develop a general theory that encompasses broader classes of dynamical systems, including time-varying systems [4] and nonlinear systems [5].
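
For reference, the ε-entropy in the Kolmogorov sense [1, 2] is defined as follows; in this project the set K will consist of (causal) systems endowed with a suitable operator metric [3].

```latex
% Kolmogorov ε-entropy of a totally bounded set K in a metric space (X, d):
H_\varepsilon(K) = \log_2 N_\varepsilon(K),
\qquad
N_\varepsilon(K) = \min\Big\{ n \in \mathbb{N} :
K \subseteq \bigcup_{i=1}^{n} B(x_i, \varepsilon),\ x_i \in X \Big\},
% i.e., the number of bits needed to specify an element of K to within
% accuracy ε; of interest are the asymptotics of H_ε(K) as ε → 0.
```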

Type of project: 100% theory
Prerequisites: Strong mathematical background
Supervisor: Thomas Allard
Professor: Helmut Bölcskei

[1] A. N. Kolmogorov, "On certain asymptotic characteristics of completely bounded metric spaces," Doklady Akademii Nauk SSSR, vol. 108, no. 3, pp. 385–389, 1956.

[2] A. N. Kolmogorov and V. M. Tikhomirov, "ε-entropy and ε-capacity of sets in functional spaces," in Uspekhi Matematicheskikh Nauk, vol. 14, no. 2, pp. 3–86, 1959.

[3] G. Zames, "On the metric complexity of causal linear systems: ε-entropy and ε-dimension for continuous time," IEEE Transactions on Automatic Control, vol. 24, no. 2, pp. 222–230, 1979. [Link to Document]

[4] G. Matz, H. Bölcskei, and F. Hlawatsch, "Time-frequency foundations of communications," IEEE Signal Processing Magazine, vol. 30, no. 6, pp. 87–96, 2013. [Link to Document]

[5] M. Schetzen, "Nonlinear system modeling based on the Wiener theory," Proceedings of the IEEE, vol. 69, no. 12, pp. 1557–1573, 1981. [Link to Document]

Finite-precision neural networks (MA)

Deep feedforward neural networks with quantized real-valued weights can approximate a wide class of functions in a Kolmogorov-optimal manner [1, 2]. However, these results do not fully explain the success of neural networks in practical applications, where, besides the network weights, the signals in all layers of the network have to be stored in computer memories and hence are available at finite precision only.

The first step of the project is to generalize the theory developed in [1, 2] to neural networks in which both the weights and the signals in all layers are of finite precision, and to establish fundamental limits on function approximation through such networks. Specifically, the new theory should be able to answer the question of how a given overall bit budget for operating the neural network should be distributed across the weights and signals in the network so as to minimize the end-to-end approximation error. The second major goal of the project is to identify function classes for which approximation through finite-precision neural networks achieves the fundamental limits identified in the first part.
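
As a possible warm-up for the simulation part, the snippet below uniformly quantizes both the weights and the signals of a toy ReLU layer and measures the end-to-end error; the 50/50 bit split is an arbitrary placeholder for the allocation rule the theory is meant to deliver.

```python
# Sketch: uniform quantization of weights and signals in a toy ReLU layer.
# How to split the total bit budget between weights and signals is exactly
# the question the project addresses; the 50/50 split is a placeholder.
import numpy as np

def quantize(v, bits, v_max=1.0):
    """Uniform quantizer with 2**bits levels on [-v_max, v_max]."""
    step = 2.0 * v_max / (2 ** bits - 1)
    return np.clip(np.round(v / step) * step, -v_max, v_max)

rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(16, 8))  # exact weights
x = rng.uniform(-1, 1, size=8)        # exact input signal

total_bits = 16
w_bits, a_bits = total_bits // 2, total_bits // 2  # placeholder allocation

y_exact = np.maximum(W @ x, 0.0)
y_pre = np.maximum(quantize(W, w_bits) @ quantize(x, a_bits), 0.0)
y_quant = quantize(y_pre, a_bits, v_max=8.0)  # crude range bound |Wx| <= 8
err = np.max(np.abs(y_exact - y_quant))       # end-to-end approximation error
```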

The project is carried out in collaboration with Dr. Van Minh Nguyen in the form of an internship at Huawei Labs in Paris.

Type of project: 70% theory, 30% simulation
Prerequisites: Strong mathematical background and good programming skills
Supervisor: Weigutian Ou
Professor: Helmut Bölcskei

[1] H. Bölcskei, P. Grohs, G. Kutyniok, and P. Petersen, "Optimal approximation with sparsely connected deep neural networks," SIAM Journal on Mathematics of Data Science, vol. 1, no. 1, pp. 8–45, 2019. [Link to Document]

[2] D. Elbrächter, D. Perekrestenko, P. Grohs, and H. Bölcskei, "Deep neural network approximation theory," IEEE Transactions on Information Theory, vol. 67, no. 5, pp. 2581–2623, May 2021. [Link to Document]

Learning with general scattering networks (BA/SP)

Deep learning methods have dramatically improved state-of-the-art results on image classification, speech recognition, and other perceptual tasks in recent years [2].

However, common architectures typically require a very large number of trainable parameters and hence large amounts of training data and compute power.

The goal of this project is to design and implement general scattering networks for feature extraction [1] with drastically fewer trainable parameters than conventional networks (such as ResNet or Inception) and to assess their performance relative to fully trained networks. In particular, you will evaluate performance on widely used datasets such as MNIST and ImageNet.
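
As a minimal illustration of the "fixed filters, no training" principle analyzed in [1], here is a toy one-layer scattering-type feature extractor built from Gabor filters, the modulus nonlinearity, and average pooling; the filter design is ad hoc and only meant to convey the structure.

```python
# Toy scattering-type layer: fixed Gabor filters, modulus nonlinearity,
# average pooling -- no trainable parameters (cf. [1]). The extracted
# features would be fed into a simple trained classifier.
import numpy as np
from scipy.signal import convolve2d

def gabor(size=7, theta=0.0, freq=0.25):
    """Real part of a Gabor filter at orientation theta."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    u = X * np.cos(theta) + Y * np.sin(theta)
    envelope = np.exp(-(X**2 + Y**2) / (2.0 * (size / 4.0) ** 2))
    return envelope * np.cos(2.0 * np.pi * freq * u)

def scatter_features(image, n_orient=4, pool=4):
    """Modulus of Gabor-filtered image, average-pooled, all orientations."""
    feats = []
    for k in range(n_orient):
        g = gabor(theta=k * np.pi / n_orient)
        m = np.abs(convolve2d(image, g, mode="same"))  # modulus nonlinearity
        h, w = m.shape
        p = m.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
        feats.append(p.ravel())
    return np.concatenate(feats)

features = scatter_features(np.random.rand(28, 28))  # e.g., an MNIST digit
```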

Type of project: 75% simulation, 25% theory
Prerequisites: Affinity for signal processing and functional analysis, programming skills
Supervisor: Ines Haymann
Professor: Helmut Bölcskei

[1] T. Wiatowski and H. Bölcskei, "A mathematical theory of deep convolutional neural networks for feature extraction," IEEE Transactions on Information Theory, vol. 64, no. 3, pp. 1845–1866, Mar. 2018. [Link to Document]

[2] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436–444, May 2015. [Link to Document]

The "logic" behind recurrent neural networks (MA)

Recurrent neural networks (RNNs) can simulate Turing machines [1]. Specifically, this is done through a clever mapping of the Turing machine's logic to the architecture and the weights of the RNN. Conversely, it would be interesting to train RNNs so as to uncover the logical structure of the underlying training data, thereby learning the "logical properties" of the mechanism generating the data. In essence, this amounts to learning Boolean functions via RNNs. Following this philosophy, you will study the realization of certain classes of Boolean functions through RNNs [2], and you will devise RNN training algorithms to identify these functions. This will also entail studying the literature on binary neural networks [4].
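
To make the idea of realizing Boolean functions through RNNs concrete, below is a hand-constructed ReLU RNN that computes the parity (iterated XOR) of a bit string exactly; devising training algorithms that recover such "logical" weights from data is part of what the project investigates.

```python
# Hand-crafted ReLU RNN computing the parity (iterated XOR) of a bit
# string, using the exact identity XOR(h, x) = (h + x) - 2*ReLU(h + x - 1)
# for h, x in {0, 1}; the hidden state stores the running parity.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def parity_rnn(bits):
    h = 0.0  # hidden state: parity of the bits seen so far
    for x in bits:
        s = h + x
        h = s - 2.0 * relu(s - 1.0)  # exact XOR realized by ReLU units
    return int(h)

assert parity_rnn([1, 0, 1, 1]) == 1  # three ones -> odd parity
assert parity_rnn([1, 1, 0, 0]) == 0  # two ones  -> even parity
```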

Type of project: 60% theory, 40% simulation
Prerequisites: Good programming skills, knowledge in machine learning and appetite for functional analysis
Supervisor: Valentin Abadie
Professor: Helmut Bölcskei

[1] H. T. Siegelmann and E. D. Sontag, "On the computational power of neural nets," Journal of Computer and System Sciences, vol. 50, no. 1, pp. 132–150, 1995. [Link to Document]

[2] R. O'Donnell, "Analysis of Boolean functions," Cambridge University Press, 2014. [Link to Document]

[3] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015. [Link to Document]

[4] H. Qin, R. Gong, X. Liu, X. Bai, J. Song, and N. Sebe, "Binary neural networks: A survey," Pattern Recognition, vol. 105, art. 107281, 2020. [Link to Document]