Bachelor/Master Theses, Semester Projects, and DAS DS Capstone Projects
If you are interested in one of the following topics, please send an email to Prof. Bölcskei and include your complete transcripts. Please note that we cannot respond to requests that do not contain your transcripts.
These projects serve to illustrate the general nature of projects we offer. You are most welcome to inquire directly with Prof. Bölcskei about tailored research projects. Likewise, please contact Prof. Bölcskei in case you are interested in a bachelor thesis project.
Also, we have a list of ongoing and completed theses on our website.
List of Semester Projects (SP)
- Acoustic sensing and trajectory estimation of objects flying at supersonic speed (with industry)
- On the metric entropy of dynamical systems
- Learning with general scattering networks
- The "logic" behind recurrent neural networks
List of Master Projects (MA)
- Learning in indefinite spaces
- Estimating covering numbers for RKHS
- Automatic synopsis generation from amendment proposals for German law
- Deep ReLU network approximation rates
- Learning cellular automaton transition rules with recurrent neural networks
- On the metric entropy of dynamical systems
- Finite-precision neural networks
- The "logic" behind recurrent neural networks
Learning in indefinite spaces (MA)
The classical theory of learning is formulated in reproducing kernel Hilbert spaces, which are generated by positive semidefinite kernel functions [1]. However, in many applications the kernel function fails to be positive semidefinite [2], which, in turn, leads to so-called (indefinite) Krein spaces [3]. The goal of this project is to develop a theory of learning for reproducing kernel Krein spaces.
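To fix ideas, here is a standard fact from the Krein-space literature [3] (a sketch of the setting, not a project result): an indefinite kernel decomposes into a difference of two positive semidefinite kernels, and the associated reproducing kernel Krein space carries an indefinite inner product assembled from two Hilbert spaces.

```latex
% Setting (following [3]): fundamental decomposition of a reproducing kernel
% Krein space into two Hilbert spaces, with the kernel splitting as
\[
  k(x,y) \;=\; k_+(x,y) \;-\; k_-(x,y),
  \qquad k_+,\, k_- \ \text{positive semidefinite},
\]
% and the indefinite inner product given by
\[
  \langle f_+ + f_-,\; g_+ + g_- \rangle_{\mathcal{K}}
  \;=\; \langle f_+, g_+ \rangle_{\mathcal{H}_+} - \langle f_-, g_- \rangle_{\mathcal{H}_-},
  \qquad f_\pm,\, g_\pm \in \mathcal{H}_\pm.
\]
```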
Type of project: 100% theory
Prerequisites: Strong mathematical background, measure theory, functional analysis
Supervisor: Erwin Riegler
Professor:
Helmut Bölcskei
References:
[1] F. Cucker and D. X. Zhou, "Learning theory," ser. Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 2007.
[2] R. Luss and A. d’Aspremont, "Support vector machine classification with indefinite kernels," Mathematical Programming Computation, vol. 1, no. 2-3, pp. 97–118, Oct. 2009.
[3] A. Gheondea, "Reproducing kernel Krein spaces," Chapter 14 in D. Alpay, Operator Theory, Springer, 2015.
Estimating covering numbers for RKHS (MA)
Type of project: 100% theory
Prerequisites: Strong mathematical background, measure theory, functional analysis
Supervisor: Erwin Riegler
Professor:
Helmut Bölcskei
References:
[1] S. Graf and H. Luschgy, "Foundations of quantization for probability distributions," Lecture Notes in Mathematics, Springer, 2000.
[2] F. Cucker and D. X. Zhou, "Learning theory," Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, 2007.
[3] M. J. Wainwright, "High-dimensional statistics: A non-asymptotic viewpoint," Cambridge University Press, 2019.
[4] A. Berlinet and C. Thomas-Agnan, "Reproducing kernel Hilbert spaces in probability and statistics," Springer, 2004.
Automatic synopsis generation from amendment proposals for German law (MA)
Amendment proposals for German law are typically published as instructions describing changes relative to the current version of a law [1], and are made accessible to the public in the form of synopses that juxtapose the old and the new text [2, 3]. Recently, significant advances in machine translation and question answering have been made using transformer networks pretrained on large unsupervised data sets [4, 5, 6]. Machine learning solutions for the specific task at hand have, however, not been studied before, so significant new contributions will be required. In particular, the semi-structured nature of amendments might make it necessary to incorporate a copy mechanism [7, 8, 9]. In this project, you will have the opportunity, first, to make novel contributions to the field of natural language processing and, second, to develop a working algorithm that can be deployed online and used by the general public.
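To illustrate the copy-mechanism idea [7, 8], here is a minimal PyTorch sketch of a single pointer-generator decoding step; the tensor shapes, names, and toy usage are illustrative assumptions, not part of the project specification.

```python
# Hypothetical, minimal pointer-generator step in the spirit of [7, 8]: the
# model mixes a generation distribution over a fixed vocabulary with a copy
# distribution obtained by scattering the attention weights onto the
# vocabulary ids of the source tokens.
import torch
import torch.nn.functional as F

def copy_generator_step(hidden, context, attn_weights, src_token_ids, W_vocab, w_pgen):
    """hidden, context: (batch, d); attn_weights, src_token_ids: (batch, src_len)."""
    features = torch.cat([hidden, context], dim=-1)      # (batch, 2d)
    p_vocab = F.softmax(features @ W_vocab, dim=-1)      # generation distribution
    p_gen = torch.sigmoid(features @ w_pgen)             # generate-vs-copy gate in (0, 1)
    # Copy distribution: attention mass accumulated per source-token vocabulary id.
    p_copy = torch.zeros_like(p_vocab).scatter_add(1, src_token_ids, attn_weights)
    return p_gen * p_vocab + (1.0 - p_gen) * p_copy      # mixture, sums to 1

# Toy usage with random tensors.
batch, d, src_len, vocab = 2, 16, 5, 100
W_vocab, w_pgen = torch.randn(2 * d, vocab), torch.randn(2 * d, 1)
hidden, context = torch.randn(batch, d), torch.randn(batch, d)
attn = F.softmax(torch.randn(batch, src_len), dim=-1)
src_ids = torch.randint(0, vocab, (batch, src_len))
probs = copy_generator_step(hidden, context, attn, src_ids, W_vocab, w_pgen)
assert torch.allclose(probs.sum(-1), torch.ones(batch))
```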
Type of project: 70% implementation/programming, 30% model development
Prerequisites: Experience with deep learning for natural language processing (NLP), knowledge of German
Supervisor: Clemens Hutter, Joseph Rumstadt
Professor:
Helmut Bölcskei
References:
[1] "Gesetz zur Modernisierung des notariellen Berufsrechts und zur Änderung weiterer Vorschriften." [Link to Document]
[2] F. Herbert, "Verfassungsblog: On matters constitutional," 2021, doi: 10.17176/20210305-033813-0. [Link to Document]
[3] "Synopse: Gesetz zur Modernisierung des notariellen Berufsrechts und zur Änderung weiterer Vorschriften." [Link to Document]
[4] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, pp. 5999–6009, 2017. [Link to Document]
[5] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, "Improving language understanding by generative pre-training," Preprint, pp. 1–12, 2018. [Link to Document]
[6] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference, vol. 1, pp. 4171–4186, 2019. [Link to Document]
[7] J. Gu, Z. Lu, H. Li, and V. Li, "Incorporating copying mechanism in sequence-to-sequence learning," 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers, vol. 3, pp. 1631–1640, 2016, doi: 10.18653/v1/p16-1154. [Link to Document]
[8] A. See, P. Liu, and C. Manning, "Get to the point: Summarization with pointer-generator networks," ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), vol. 1, pp. 1073–1083, 2017, doi: 10.18653/v1/P17-1099. [Link to Document]
[9] B. McCann, N. Keskar, C. Xiong, and R. Socher, "The natural language decathlon: Multitask learning as question answering." [Link to Document]
Acoustic sensing and trajectory estimation of objects flying at supersonic speed (with industry) (SP)
Objects flying at supersonic speed generate an acoustic shock wave, which can be picked up by microphones and used to infer the object's position and trajectory. The goal of this project is to investigate new techniques for this task, such as the linearization of non-linear systems of equations, least-squares fitting, and neural-network-driven machine learning. Existing hardware and algorithms provide a starting point for the project, which will be carried out in collaboration with an industry partner, SIUS (located in Effretikon, Zurich). SIUS offers close supervision and the possibility to use its hardware and test laboratory.
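As a rough illustration of the least-squares ingredient, the following SciPy sketch fits a source position and emission time to simulated arrival times under a deliberately simplified point-source model; the actual supersonic (Mach-cone) geometry is more involved, and all numbers are made up.

```python
# Simplified illustration: a point source at unknown position x emits at unknown
# time t0; microphone i at position m_i records arrival time t_i = t0 + |x - m_i| / c.
# scipy.optimize.least_squares linearizes these non-linear equations internally
# (Gauss-Newton / Levenberg-Marquardt) to fit (x, t0).
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound [m/s]

def residuals(params, mics, t_meas):
    x, t0 = params[:3], params[3]
    return t0 + np.linalg.norm(mics - x, axis=1) / C - t_meas

rng = np.random.default_rng(0)
mics = rng.uniform(-1.0, 1.0, size=(8, 3))            # known microphone positions [m]
x_true, t0_true = np.array([0.3, -0.2, 0.5]), 1e-3    # ground truth to recover
t_meas = t0_true + np.linalg.norm(mics - x_true, axis=1) / C
t_meas += rng.normal(scale=1e-6, size=t_meas.shape)   # timing jitter

fit = least_squares(residuals, x0=np.zeros(4), args=(mics, t_meas))
print("estimated position:", fit.x[:3], "emission time:", fit.x[3])
```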
About the industry partner: SIUS is the world's leading manufacturer of electronic scoring systems in shooting sports. The company specializes in high-speed, high-precision measurement equipment capable of measuring projectile position and trajectory, and has been equipping the most important international competitions, including the Olympic Games, for decades.
Type of project: 20% literature research, 20% theory, 50% implementation/programming, 10% experiments
Prerequisites: Solid mathematical background, knowledge of SciPy, MATLAB, or a similar toolset; ideally knowledge of (deep) neural networks
Supervisor: Michael Lerjen, Steven Müllener
Professor:
Helmut Bölcskei
References:
[1] SIUS Homepage [Link to Document]
Deep ReLU network approximation rates (MA)
The goal of this project is to understand the techniques used in [2] and [3] and to subsequently employ them to characterize approximation rates for wavelet systems generated by a Daubechies wavelet [3, Sec. 5].
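A flavor of the techniques involved (a standard construction from [1] that is also central to [2]): composing a ReLU-realizable hat function with itself yields sawtooth functions, and a telescoping sum of these approximates the squaring function with error decaying exponentially in depth.

```latex
% Hat function realized exactly by a one-hidden-layer ReLU network,
% \rho(x) = \max\{x, 0\}:
\[
  g(x) \;=\; 2\rho(x) - 4\rho\!\left(x - \tfrac{1}{2}\right) + 2\rho(x-1).
\]
% The m-fold composition g^{\circ m} is a sawtooth with 2^{m-1} teeth, and the
% telescoping sum interpolates x^2 on a grid of width 2^{-M}, so that
\[
  \sup_{x \in [0,1]}
  \left|\, x^2 - \Bigl( x - \sum_{m=1}^{M} \frac{g^{\circ m}(x)}{2^{2m}} \Bigr) \right|
  \;\le\; 2^{-2M-2}.
\]
```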
Type of project: 100% theory
Prerequisites: Strong mathematical background
Supervisor: Dennis Elbrächter
Professor:
Helmut Bölcskei
References:
[1] D. Yarotsky, "Error bounds for approximations with deep ReLU networks," Neural Networks, vol. 94, pp. 103–114, 2017. [Link to Document]
[2] D. Elbrächter, D. Perekrestenko, P. Grohs, and H. Bölcskei, "Deep neural network approximation theory," IEEE Transactions on Information Theory, vol. 67, no. 5, pp. 2581–2623, May 2021. [Link to Document]
[3] I. Daubechies, R. A. DeVore, N. Dym, S. Faigenbaum-Golovin, S. Z. Kovalsky, K.-C. Lin, J. Park, G. Petrova, and B. Sober, "Neural network approximation of refinable functions," IEEE Transactions on Information Theory, vol. 69, no. 1, pp. 482–495, January 2023. [Link to Document]
Learning cellular automaton transition rules with recurrent neural networks (MA)
A cellular automaton (CA) is a discrete dynamical system consisting of a regular lattice in one or more dimensions with cell values taken from a finite set. The cells change their states at synchronous discrete time steps based on a transition rule [1]. Despite the simplicity of the CA model, it can exhibit complex global behavior. With suitably chosen transition rules, cellular automata can simulate a plethora of dynamical behaviors [2, 3]. The inverse problem of deducing the transition rule from a given global behavior is extremely difficult [4]. In this project, you will investigate the possibility of training deep recurrent neural networks to learn CA transition rules.
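For concreteness, the following NumPy sketch shows how training pairs for this inverse problem could be generated from an elementary (one-dimensional, binary, radius-1) CA; the rule number, lattice size, and rollout length are arbitrary illustrative choices.

```python
# Generate (configuration, next configuration) pairs from an elementary CA;
# an RNN would be trained on such pairs to recover the transition rule.
import numpy as np

def ca_step(state, rule_number):
    """One synchronous update with periodic boundary conditions [1]."""
    # Bit i of the rule number is the output for neighborhood pattern i.
    rule = np.array([(rule_number >> i) & 1 for i in range(8)], dtype=np.uint8)
    left, right = np.roll(state, 1), np.roll(state, -1)
    return rule[4 * left + 2 * state + right]  # (l, c, r) encoded as 0..7

rng = np.random.default_rng(0)
state = rng.integers(0, 2, size=64, dtype=np.uint8)
trajectory = [state]
for _ in range(32):
    trajectory.append(ca_step(trajectory[-1], rule_number=110))  # e.g. rule 110
pairs = list(zip(trajectory[:-1], trajectory[1:]))  # (input, target) training pairs
```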
Type of project: 30% theory, 70% implementation
Prerequisites: Good programming skills, knowledge of machine learning
Supervisor: Yani Zhang
Professor:
Helmut Bölcskei
References:
[1] J. Kari, "Theory of cellular automata: A survey," Theoretical Computer Science, vol. 334, no. 1–3, pp. 3–33, 2005. [Link to Document]
[2] T. Toffoli and N. Margolus, "Cellular automata machines: A new environment for modeling," MIT Press, 1987. [Link to Document]
[3] A. Adamatzky, “Game of life cellular automata,” vol. 1, Springer, 2010. [Link to Document]
[4] N. Ganguly, B. K. Sikdar, A. Deutsch, G. St. John Canright, and P. P. Chaudhuri, “A survey on cellular automata,” 2003. [Link to Document]
On the metric entropy of dynamical systems (MA/SP)

Type of project: 100% theory
Prerequisites: Strong mathematical background
Supervisor: Thomas Allard
Professor:
Helmut Bölcskei
References:
[1] A. N. Kolmogorov, "On certain asymptotic characteristics of completely bounded metric spaces," Doklady Akademii Nauk SSSR, vol. 108, no. 3, pp. 385–389, 1956.
[2] A. N. Kolmogorov and V. M. Tikhomirov, "ε-entropy and ε-capacity of sets in functional spaces," Uspekhi Matematicheskikh Nauk, vol. 14, no. 2, pp. 3–86, 1959.
[3] G. Zames, "On the metric complexity of causal linear systems: ε-entropy and ε-dimension for continuous time," IEEE Transactions on Automatic Control, vol. 24, no. 2, pp. 222–230, 1979. [Link to Document]
[4] G. Matz, H. Bölcskei, and F. Hlawatsch, "Time-frequency foundations of communications," IEEE Signal Processing Magazine, vol. 30, no. 6, pp. 87–96, 2013. [Link to Document]
[5] M. Schetzen, "Nonlinear system modeling based on the Wiener theory," Proceedings of the IEEE, vol. 69, no. 12, pp. 1557–1573, 1981. [Link to Document]
Finite-precision neural networks (MA)
The first step of the project is to generalize the theory developed in [1, 2] to neural networks in which both the weights and the signals in all layers are of finite precision, and to establish fundamental limits on function approximation through such networks. Specifically, the new theory should answer the question of how a given overall bit budget for operating the neural network should be distributed across the weights and signals in the network so as to minimize the end-to-end approximation error. The second major goal of the project is to identify function classes for which approximation through finite-precision neural networks achieves the fundamental limits identified in the first part.
The project is carried out in collaboration with Dr. Van Minh Nguyen in the form of an internship at Huawei Labs in Paris.
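As a toy numerical warm-up (quantizing only the weights of an arbitrary small network, whereas the project additionally treats finite-precision signals in all layers), one could measure how the end-to-end error depends on the bit budget:

```python
# Illustrative experiment: uniform b-bit quantization of the weights of a
# random two-layer ReLU network, and the resulting end-to-end output error.
import numpy as np

def quantize(w, bits, w_max=1.0):
    """Uniform symmetric quantizer with 2**bits levels on [-w_max, w_max]."""
    step = 2.0 * w_max / (2 ** bits - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

rng = np.random.default_rng(0)
W1, b1 = rng.uniform(-1, 1, (32, 1)), rng.uniform(-1, 1, 32)
W2 = rng.uniform(-1, 1, (1, 32))
x = np.linspace(-1, 1, 512)[None, :]                     # evaluation grid

def forward(W1_, b1_, W2_):
    return W2_ @ np.maximum(W1_ @ x + b1_[:, None], 0.0)  # ReLU network

y_ref = forward(W1, b1, W2)
for bits in (2, 4, 6, 8):
    y_q = forward(quantize(W1, bits), quantize(b1, bits), quantize(W2, bits))
    print(bits, "bits -> max output error", np.abs(y_q - y_ref).max())
```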
Type of project: 70% theory, 30% simulation
Prerequisites: Strong mathematical background and good programming skills
Supervisor: Weigutian Ou
Professor:
Helmut Bölcskei
References:
[1] H. Bölcskei, P. Grohs, G. Kutyniok, and P. Petersen, "Optimal approximation with sparsely connected deep neural networks," SIAM Journal on Mathematics of Data Science, vol. 1, no. 1, pp. 8–45, 2019. [Link to Document]
[2] D. Elbrächter, D. Perekrestenko, P. Grohs, and H. Bölcskei, "Deep neural network approximation theory," IEEE Transactions on Information Theory, vol. 67, no. 5, pp. 2581–2623, May 2021. [Link to Document]
Learning with general scattering networks (SP)
Deep neural networks deliver state-of-the-art performance in many machine learning tasks [2]. However, common architectures typically require a very large number of trainable parameters and therefore large amounts of training data and computing power.
The goal of this project is to design and implement general scattering networks for feature extraction [1] with drastically fewer trainable parameters than conventional neural networks (such as ResNet or Inception) and to assess the performance of the resulting networks relative to fully trained networks. In particular, you will evaluate the performance on widely used datasets such as MNIST and ImageNet.
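The following PyTorch sketch indicates what a scattering-type feature extractor in the spirit of [1] could look like; the random filters and layer sizes are placeholders (practical designs use structured filters, e.g., Gabor or wavelet filter banks), and only the final linear classifier would be trained.

```python
# Fixed (non-trained) filter banks, modulus nonlinearity, and pooling in each
# layer; the network extracts features without any trainable convolution weights.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
filters = [torch.randn(8, 1, 5, 5), torch.randn(16, 8, 5, 5)]  # placeholder banks

def scattering_features(images):
    """images: (batch, 1, H, W) -> flattened feature vectors."""
    u = images
    for h in filters:
        u = F.conv2d(u, h, padding=2)   # fixed convolution
        u = u.abs()                     # modulus nonlinearity
        u = F.avg_pool2d(u, 2)          # pooling for translation invariance
    return u.flatten(start_dim=1)

x = torch.randn(4, 1, 28, 28)           # e.g., MNIST-sized inputs
feats = scattering_features(x)
classifier = torch.nn.Linear(feats.shape[1], 10)   # the only trained module
logits = classifier(feats)
```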
Type of project: 75% simulation, 25% theory
Prerequisites: Affinity for signal processing and functional analysis, programming skills
Supervisor: Ines Haymann
Professor:
Helmut Bölcskei
References:
[1] T. Wiatowski and H. Bölcskei, "A mathematical theory of deep convolutional neural networks for feature extraction," IEEE Transactions on Information Theory, vol. 64, no. 3, pp. 1845–1866, Mar. 2018. [Link to Document]
[2] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, pp. 436–444, May 2015. [Link to Document]
The "logic" behind recurrent neural networks (MA/SP)

Type of project: 60% theory, 40% simulation
Prerequisites: Good programming skills, knowledge of machine learning, and an appetite for functional analysis
Supervisor: Valentin Abadie
Professor:
Helmut Bölcskei
References:
[1] H. T. Siegelmann and E. D. Sontag, "On the computational power of neural nets," Journal of Computer and System Sciences, vol. 50, no. 1, pp. 132–150, 1995. [Link to Document]
[2] R. O'Donnell, "Analysis of Boolean functions," Cambridge University Press, 2014. [Link to Document]
[3] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, 2015. [Link to Document]
[4] H. Qin, R. Gong, X. Liu, X. Bai, J. Song, and N. Sebe, "Binary neural networks: A survey," Pattern Recognition, vol. 105, art. 107281, 2020. [Link to Document]