Where to go from here#
Now that you have taken the first steps in machine learning with neural networks, what is next?
Immediate steps#
Using what you have learned so far, you can directly tackle tasks like the following, perhaps even building on the code provided here:
- speed up simulations (map initial conditions to the result, map a geometrical design to the resulting properties, …)
- interpret noisy measurement traces
- recognize experimental images
- learn resolution enhancement of images
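As a concrete starting point, the first of these tasks, speeding up a simulation, can be phrased as plain supervised regression: run the (expensive) simulation to generate training pairs, then train a network to map inputs directly to outputs. Below is a minimal pure-jax sketch in the style of the earlier lessons; the toy `simulation` function, the layer sizes, and all hyperparameters are placeholders you would replace with your own problem.

```python
import jax
import jax.numpy as jnp
from jax import grad, jit

# toy stand-in for an expensive simulation: maps a design parameter to a property
def simulation(x):
    return jnp.sin(3.0 * x) * jnp.exp(-x**2)

def init_params(key, sizes):
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in),
                       jnp.zeros(n_out)))
    return params

def net(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return x @ w + b  # linear output layer

def loss(params, x, y):
    return jnp.mean((net(params, x) - y) ** 2)

@jit
def step(params, x, y, lr=0.03):
    grads = grad(loss)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# training data: run the "simulation" once on sampled inputs, then train the surrogate
key = jax.random.PRNGKey(0)
x_train = jax.random.uniform(key, (256, 1), minval=-2.0, maxval=2.0)
y_train = simulation(x_train)

params = init_params(key, [1, 32, 32, 1])
initial_loss = loss(params, x_train, y_train)
for _ in range(3000):
    params = step(params, x_train, y_train)
final_loss = loss(params, x_train, y_train)
```

Once trained, evaluating `net` is typically orders of magnitude faster than rerunning a real simulation, which is the whole point of the surrogate.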
Note that you may want to look into jax-based libraries like flax or haiku, which provide an object-oriented way to define neural networks. Even though the code for the network definition itself is not much shorter than what we did using pure jax, the parameter initialization is taken care of automatically.
Brainstorming
Think about the simplest task in your own research that you could tackle using machine learning with neural networks! Good options are often: speeding up a simulation, solving an inverse problem, or analyzing noisy measurement data. Think carefully about what exactly the input and the desired output would be, and how you would obtain the training data.
Advanced topics#
Here I just mention a few keywords so you can look them up and learn about them; they lead to the forefront of current machine learning research.
- use various versions of “autoencoders” to learn good compressed representations of unlabeled data
- use “residual networks” and “U-Nets” for advanced image processing
- use “recurrent networks” to analyze time series
- use “graph neural networks” for input like molecular structures and other data of variable size
- use “reinforcement learning” to discover control strategies
- use “normalizing flows” or “diffusion models” to generate new data similar to existing data
- use “transformers” to analyze and produce sequences with long dependencies
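To give a flavor of the first keyword: an autoencoder is simply a network trained to reproduce its own input through a narrow bottleneck, so the bottleneck activations become a learned compressed representation, and no labels are needed. Here is a minimal pure-jax sketch; the toy data (points that secretly live on a two-dimensional plane inside an eight-dimensional space) and all layer sizes are made-up placeholders.

```python
import jax
import jax.numpy as jnp
from jax import grad, jit

def init_params(key, sizes):
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in),
                       jnp.zeros(n_out)))
    return params

def autoencoder(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)   # the narrowest layer is the "bottleneck"
    w, b = params[-1]
    return x @ w + b

def loss(params, x):
    # reconstruction error: the target is the input itself (no labels needed)
    return jnp.mean((autoencoder(params, x) - x) ** 2)

@jit
def step(params, x, lr=0.03):
    grads = grad(loss)(params, x)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

# toy unlabeled data: 8D vectors that actually lie on a random 2D plane
key = jax.random.PRNGKey(0)
z = jax.random.normal(key, (256, 2))
plane = jax.random.normal(jax.random.PRNGKey(1), (2, 8)) / jnp.sqrt(2.0)
data = z @ plane

# a bottleneck of size 2 matches the true dimensionality of the data
params = init_params(key, [8, 2, 8])
initial_loss = loss(params, data)
for _ in range(3000):
    params = step(params, data)
final_loss = loss(params, data)
```

After training, the first half of the network (up to the bottleneck) can serve as a learned two-dimensional encoding of the eight-dimensional data.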
Two useful lecture series#
Over the years I have developed two courses dealing with machine learning for an audience of physicists. They go into much more detail than these three lessons.
- The basics: “Machine Learning for Physicists”, first delivered in 2017. A hands-on introduction. See the machine learning lecture wiki 2024 for the latest incarnation, using jax, and video recordings of the 2024 lectures, or also the older 2019 video recordings on YouTube.
- Advanced topics: “Advanced Machine Learning for Physics, Science, and Artificial Scientific Discovery” in 2021/22. See the advanced machine learning lecture wiki and the advanced machine learning video recordings on YouTube. This covers advanced concepts like transformers and also many of the statistical and information-theoretic concepts needed to understand machine learning methods.
Feedback and Outlook#
Feedback
If you have feedback or suggestions, or notice a little bug in this online book, please use the GitHub octocat icon at the top of the page to “open an issue” and leave a note. If you like this book, spread the word: pass along the link via social media or hand it to your friends.
Outlook
Have fun and start applying what you learned here! Maybe in the end you can also contribute to some of the long-term visions we like to think about in my group, like building an ‘artificial scientist’ that comes up with new hypotheses and is able to independently understand the world.
Visit the homepage of the theory division at the Max Planck Institute for the Science of Light. We regularly have postdoc and PhD positions available for physicists, mathematicians, and computer scientists who would like to work on either artificial scientific discovery or neuromorphic computing (designing novel learning machines).