I’m a researcher at the intersection of software engineering and applied artificial intelligence. I work at RISE Research Institutes of Sweden, and I teach at Lund University as an adjunct lecturer. My first engineering job was in process automation. After a few years, I returned to university to pursue a PhD in software engineering. My primary research interests concern software engineering for systems that rely on machine learning. Many of my current projects deal with requirements engineering and testing of critical systems that must constitute “trustworthy AI,” e.g., automotive systems.
It is an introductory software engineering course that acts as a gateway to more advanced elective SE courses. The curriculum includes brief introductions to development processes, requirements engineering, software design, and software testing. Students practice programming in an IDE, using Git, automating tests, writing technical documentation, conducting formal inspections, developing a business plan, and working in a team. The backbone of the course is a small end-to-end development project done in groups of six students. Each group develops a robot for the programming game Robocode, and we host a tournament at the end.
The twist is that no group is allowed to field their own robot in the tournament. Instead, groups offer their robots on the “open market” and purchase other robots to compose a successful team. This sets up a market-driven development context that is more interesting than the bespoke development context we typically see in university courses. Student groups must try to position themselves on the market and pitch their robot. In the end, we give three different awards. Student groups can 1) win the tournament, 2) end up as the most profitable group on the market, or 3) develop the robot that proves most valuable during the tournament. The gamification boosts student engagement. I’ve written more about it in a paper.
Software engineering is a creative human activity. The biggest challenges boil down to human factors. Technology is complex too, of course, but the hardest problems involve aligning the engineers involved – both in time and space. We’re getting better at aligning distributed developers. The time dimension, however, remains tricky. Software systems evolve for years, perhaps decades, while humans barely remember what we had for dinner three days ago. We need tools supporting collaboration, communication, and comprehension. When we support these three Cs, the technical challenges will find their solutions.
The course is a typical 101 course, a first encounter with software engineering. The students have taken some programming courses before but have never worked together in teams. We work in small teams of six students for two months – but even this small setting is enough to experience some of the human issues. If you have never contributed to a shared code repository before, this is a big step. Prior to this course, the students have done some pair programming and joint lab exercises, but a majority of them have never used Git before.
The approach in the course is to expose the student teams to a small-scale software engineering project. We discuss what is going on, what the main challenges tend to be, and how to mitigate them. When specific issues pop up, we illuminate them and try to maximize learning. All this requires continuous monitoring of the development process. We set up a continuous integration environment for the student teams and do our best to stay on top of the activity. We encourage the students to communicate using Slack, which helps the supervisors follow the discussions. This has actually worked particularly well during the pandemic: most work is done by distributed teams of students, and more of the communication happens in Slack channels.
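To make the CI setup concrete, a minimal workflow along these lines could look as follows. This is a hypothetical sketch of a GitHub Actions workflow – the job name, Java version, and Gradle tasks are my assumptions, not the course’s actual configuration:

```yaml
# .github/workflows/ci.yml – hypothetical sketch; names and versions are assumptions
name: CI
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the team's repository
      - uses: actions/checkout@v4
      # Set up a JDK for the Gradle build
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'
      # Compile, run the unit tests, and produce a JaCoCo coverage report
      - run: ./gradlew build jacocoTestReport
```

A workflow like this runs on every push, so supervisors and students see build and test status continuously.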
We use CodeScene for gamification and visualization. First, CodeScene contributes nicely to the gamification approach we rely on in the course. We use CodeScene’s code health badge on our CI dashboard to encourage the students to maintain high-quality source code. Getting such an explicit assessment motivates the students to investigate code smells and refactor accordingly. Second, we use the visualizations provided by CodeScene during supervision meetings. Visualizations of hotspots and code health are intuitive, and I like them as a backdrop when meeting with student teams. When both supervisors and students rest their eyes on the same visualizations, it is easier to guide discussions around the source code repository. I find that the importance of source code visualization in teaching has been amplified by all the screen-sharing sessions we’ve had throughout the pandemic.
We use a bunch of tools in the course to support the students’ Java development. While many tools are great, I don’t want to overwhelm the students, so I prefer sticking to a set that I really like. Teams maintain their code on GitHub, and we use GitHub Actions for CI together with Gradle and the JaCoCo Java code coverage library. The test results are sent to CodeClimate, and we also use the static code analysis provided by CodeClimate, backed by SonarJava, as well as FindBugs during our lab sessions. CodeScene goes beyond the other static analysis tools by considering more than just the source code instructions. I’m very pleased with the toolbox used in the course at the moment – the students get a good feel for what can be done with tool support.
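As an illustration of how Gradle and JaCoCo fit together, a minimal build file could look like the sketch below. This is an assumption on my part – the actual build files, plugin versions, and report settings used in the course may differ:

```groovy
// build.gradle – minimal sketch; versions and settings are assumptions
plugins {
    id 'java'
    id 'jacoco'   // enables the JaCoCo code coverage tooling
}

repositories {
    mavenCentral()
}

dependencies {
    testImplementation 'junit:junit:4.13.2'
}

test {
    // Generate the coverage report whenever the tests run
    finalizedBy jacocoTestReport
}

jacocoTestReport {
    reports {
        xml.required = true   // machine-readable report, e.g., for an external service
        html.required = true  // human-readable report for the students
    }
}
```

With a setup like this, `./gradlew test` produces coverage data that the CI pipeline can forward to whatever dashboard the course relies on.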