For professors, grading is grunt work, and it is mostly done by hand. As such, it is usually limited to the things that can be checked by hand, which eventually results in a very limited understanding of how students are learning, a lengthy grading process, and poor learning outcomes. The first step is understanding grading as part of the students' learning process: it is simply a chance to give them feedback so that they can meet their learning objectives, and it needs to happen frequently and meaningfully. The second step is decoupling grading from your personal judgment and setting it up as a functional requirement. Putting both together leads to automating grading as much as possible.

While I had been using GitHub for turning in assignments for quite some time, I really started using CI pipelines in 2017, initially with Travis, adding GitHub Actions in 2019 when they started admitting people into the beta. They are now used alongside each other, putting assignments through different kinds of tests, from (mild) plagiarism checks to, lately, setting up the student environment in a Docker container and running things inside it.

The main idea of this talk will be how to go from concept to (software) test, how the different tests have been set up, and what kinds of obstacles have been found along the way. And, of course, whether this has resulted in better learning outcomes for students.
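To make the container-based approach concrete, here is a minimal sketch of a GitHub Actions workflow that checks out a student's assignment and runs its tests inside a Docker container. The file path, container image, and test command are assumptions for illustration, not the actual workflow used in the course:

```yaml
# .github/workflows/grade.yml — hypothetical example of a grading pipeline
name: Grade assignment

on: [push, pull_request]

jobs:
  grade:
    runs-on: ubuntu-latest
    # Running inside a container keeps the student environment reproducible;
    # the image here is an assumed choice, pick one matching the course language.
    container: python:3.12
    steps:
      - uses: actions/checkout@v4
      # Any test runner that exits non-zero on failure works here;
      # pytest is an assumed example.
      - name: Run assignment tests
        run: |
          pip install pytest
          pytest -v
```

Because the job fails whenever a test fails, students get frequent, meaningful feedback on every push, which is exactly the point of treating grading as a functional requirement.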