Johannes Wienke

[Image: head shot of Johannes Wienke]

I am a software architect and software engineer focusing on the architecture and accompanying software development processes for complex distributed systems such as microservice architectures. Besides my professional work, I actively contribute to the open source community.

Testing OpenAPI specification compliance in Quarkus with Prism

Quarkus is a good framework choice when developing RESTful APIs in an API-first manner, using the OpenAPI Generator to generate models and API interfaces. That way, large parts of compliance with the API specification are already enforced by the compiler. However, not every inconsistency can be detected this way. In this post, I demonstrate how to integrate Stoplight’s Prism proxy into the test infrastructure of a Quarkus Kotlin project for validating OpenAPI specification compliance automatically as part of the API tests.

Table of contents:
- Motivation
- Validating OpenAPI compliance with Prism
- Integrating Prism into Quarkus
- Implementing a Quarkus test resource
- Redirecting test requests through the test resource
- Detecting and fixing a bug in the example project
- Using the test resource in integration tests
- Trading confidence for test execution time
- Summary

Motivation

In API-first development with Quarkus and Kotlin I have shown a basic setup for Quarkus to support API-first development. As a short recap, API-first development is an approach to developing (RESTful) APIs where a formal specification of the intended API (changes) is created before implementing the API provider or consumers. That way, we can make use of the specification for code generation, parallel development, and verification purposes. A common issue when using APIs based on their documentation is that the actual API implementation differs from what is documented. The setup shown before prevents large parts of this problem by leveraging code generation through the OpenAPI Generator. Moreover, the compiler catches a few error cases where generated interfaces are not implemented properly. However, not every aspect of API compliance is validated this way, and we would still be able to provide an implementation that deviates from the specification. Here’s a short example demonstrating one of the more obvious ways we can still deviate:

```yaml
# ...
paths:
  /pets:
    get:
      summary: List all pets
      operationId: listPets
      responses:
        '200':
          description: An array of pets
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pets"
# ...
components:
  schemas:
    PetId:
      type: integer
      format: int64
    Pet:
      type: object
      required:
        - id
        - name
      properties:
        id:
          $ref: "#/components/schemas/PetId"
        name:
          type: string
    Pets:
      type: array
      items:
        $ref: "#/components/schemas/Pet"
```
...
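According to the table of contents, the integration centers on a Quarkus test resource that runs Prism as a validation proxy in front of the application under test. The following is a minimal sketch of how such a resource could be wired up with Testcontainers; the image tag, the spec location, the default ports, and the `test.prism.url` property name are assumptions for illustration, not details taken from the post:

```kotlin
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager
import org.testcontainers.Testcontainers
import org.testcontainers.containers.BindMode
import org.testcontainers.containers.GenericContainer
import org.testcontainers.utility.DockerImageName

// Work around GenericContainer's self-referencing generics in Kotlin.
class PrismContainer(image: DockerImageName) : GenericContainer<PrismContainer>(image)

class PrismTestResource : QuarkusTestResourceLifecycleManager {

    // Assumed image tag and spec location; adjust to the actual project layout.
    private val prism = PrismContainer(DockerImageName.parse("stoplight/prism:5"))
        .withClasspathResourceMapping("openapi.yaml", "/tmp/openapi.yaml", BindMode.READ_ONLY)
        // Proxy requests to the application under test and report specification
        // violations as errors (Prism listens on port 4010 by default).
        .withCommand(
            "proxy", "--errors", "--host", "0.0.0.0",
            "/tmp/openapi.yaml", "http://host.testcontainers.internal:8081",
        )
        .withExposedPorts(4010)

    override fun start(): Map<String, String> {
        // Make the Quarkus test port (8081 by default) reachable from the container.
        Testcontainers.exposeHostPorts(8081)
        prism.start()
        // Tests can pick up this property to send their requests through Prism.
        return mapOf("test.prism.url" to "http://${prism.host}:${prism.getMappedPort(4010)}")
    }

    override fun stop() {
        prism.stop()
    }
}
```

A test class would then activate the resource with `@QuarkusTestResource(PrismTestResource::class)` and direct its HTTP requests at the returned proxy URL instead of the application itself.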

December 4, 2024 · updated December 6, 2024 · 16 min

API-first development with Quarkus and Kotlin

API-first development is a way of developing distributed systems that makes the API specifications of the system components a first-class citizen in the development process. This approach promises to better control the challenges of loosely coupled components communicating via inter-process communication, so that the benefits of separating a monolith into individual components pay off sooner. In this post, I will give a brief introduction to what API-first development is and what my currently preferred JVM-based setup for developing microservices using API-first principles looks like.

Table of contents:
- What is API-first development
- API-first development for REST APIs with OpenAPI
- API-first and code-first development with OpenAPI
- Setting up an API-first development workflow in Quarkus
- Add the API definition to the project
- Enable SwaggerUI
- Only serve the defined API in SwaggerUI
- Let Quarkus determine the server in the OpenAPI specification
- Generate API stubs using the OpenAPI Generator
- Implementing a generated API resource
- Summary
- Bibliography

What is API-first development

In a distributed system, such as a microservice architecture, inter-process communication plays a crucial role. But it comes at a price. Distributed systems are often harder to debug and to refactor. In a monolith, the IDE and the compiler help with refactoring programming APIs such as an interface or abstract base class. A mismatch between the API provider and the consumer will usually be caught by the compiler. In a distributed system, usually no compiler helps when changing APIs, and we cannot rely on it to find mismatches between consumers and providers. Without care and special testing techniques, errors will only surface at runtime, making the system more brittle and changes more costly. Therefore, it is important that:

- APIs are well-designed so that breaking or incompatible changes are infrequent
- if required, breaking or incompatible changes happen in a controlled way that prevents API consumers from suddenly failing without prior notice

...
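The steps "Generate API stubs using the OpenAPI Generator" and "Implementing a generated API resource" hint at the key mechanism: the generator emits an API interface, and the hand-written resource merely implements it, so the compiler flags mismatches. Here is a minimal sketch of that idea; `PetsApi` and `Pet` are invented stand-ins to keep the example self-contained and will differ from the actually generated sources:

```kotlin
import jakarta.ws.rs.GET
import jakarta.ws.rs.Path
import jakarta.ws.rs.Produces
import jakarta.ws.rs.core.MediaType

// Stand-ins for what the OpenAPI Generator would emit from a petstore-style
// specification; the real generated code is more elaborate.
data class Pet(val id: Long, val name: String)

@Path("/pets")
interface PetsApi {
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    fun listPets(): List<Pet>
}

// The resource only implements the generated interface. When the specification
// (and hence the interface) changes, any mismatch becomes a compile error
// instead of a runtime surprise.
class PetsResource : PetsApi {
    override fun listPets(): List<Pet> = listOf(Pet(id = 1, name = "Garfield"))
}
```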

August 16, 2024 · updated December 6, 2024 · 16 min

Tales from the commit log

A piece of software never exists in isolation and without context. The problem to solve and the reasoning and experience of its programmers shape the implementation and are important information for future developers. Even well-written code cannot convey all of this by itself. Therefore, further means of communicating such information are required. Code comments and external documentation are the obvious choices. However, with Git we also have the chance to design the commit log in a way that it deliberately tells a story about the implementation that is not visible from the code itself.

Table of contents:
- Why code and change sets need to communicate
- Recording history vs. telling a story
- Properties of good commits
- Create cohesive commits
- Explain what has been done and WHY
- Creating a clean commit history
- Bibliography

Why code and change sets need to communicate

Programming is a form of human communication, mostly with other humans; incidentally also with the computer by instructing it to execute a function for us. Therefore, when implementing something, we need to carefully consider how to communicate what is done and how it is done, similar to when we talk to each other about a manual task using natural language. We need to negotiate why and how we do things to ensure that everyone understands what to do and in which way. Otherwise, misinterpretation, confusion, and conflicts are bound to happen. With current software development techniques such as pull/merge requests, the phrase that code is read much more often than it is written or changed is probably more valid than it ever used to be. Reading is only enjoyable and delivers the required insights if the literature you are reading is well written and not a convoluted mess of illogical mysteries. Similarly, reviewing a pull request only works well if the proposed changes are presented in an understandable way. Clean and self-documenting code is an important technique to ensure a reasonable reading experience. However, the code itself mostly explains what is realized and how. The really important pieces of information usually hide behind "why" questions. Why did I use this design over the more obvious one? Why this algorithm and not the other one? Why is this special case needed to fulfill our business requirements? Why do we need to make this change at all? These are the important pieces of information that will likely cause confusion sooner or later if omitted. Bugs might be introduced in future refactorings if business requirements and their special cases are not known to someone reworking the code. New features might break the intended design if it wasn’t clearly presented. Finally, a pull request review is much more productive and also more pleasant for the reviewer if requirements, design choices, and motivations are known. Code comments can address many of these issues if done properly, and much has been written about how to create useful comments (e.g. [Atwood2006], [McConnell2004] chapter 32, [Ousterhout2018]). Good and concise recommendations are:

Comments augment the code by providing information at a different level of detail. Some comments provide information at a lower, more detailed, level than the code; these comments add precision by clarifying the exact meaning of the code. Other comments provide information at a higher, more abstract, level than the code; these comments offer intuition, such as the reasoning behind the code, or a simpler and more abstract way of thinking about the code.
— [Ousterhout2018]

...
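To make the "Explain what has been done and WHY" guideline from the table of contents concrete, here is an invented example of a commit message whose body carries reasoning that the diff alone could never convey (the scenario is hypothetical, not from the post):

```
Limit retries for payment provider callbacks to three attempts

The provider returns HTTP 502 during its nightly maintenance window.
With unbounded retries, the job queue kept growing until the service
ran out of memory. Three attempts with exponential backoff cover all
outage windows we have observed so far while bounding resource usage.
```

The subject line summarizes what changed; the body answers the "why" questions that a future refactoring would otherwise have to guess at.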

November 5, 2021 · updated November 5, 2021 · 11 min

On writing useful (unit) tests

Throughout the years I have seen a lot of people struggling with writing useful and readable test cases. And I have also seen a lot of existing test code that resembles a bowl of spaghetti more than it helps to ensure software quality. While some say that writing tests at all is better than not having automated tests, well-structured test code is vital to achieving most of the benefits claimed by the TDD community. As structuring tests seems to be a real problem for many, this post collects some personal advice on how to create a clean test code base that helps beyond technically checking program correctness.

Table of contents:
- The benefits of well-structured tests
- Tests are a means of communication
- Tests as a tool for debugging
- Preconditions for good tests
- Abstraction
- Dependency injection
- Guidelines for writing test cases
- Verify one aspect per test case
- Use test case names to express readable requirements
- Test your business, not someone else’s
- Clarify requirements with syntax and features, don’t dilute them
- What to test and what not to test
- How to provide test doubles: stubs and mocks
- Conclusion
- Bibliography

The benefits of well-structured tests

Why does the structure of test code actually matter? Why should one bother with achieving clean test cases if a convoluted test function in the end verifies the same aspects of the code? There are (at least) two good reasons why investing work in the structure of test cases is important.

Note: This blog post focuses on techniques for tests that are written in a typical general-purpose programming language with tools such as xUnit-like frameworks. This is not only the case for unit tests: higher levels of the testing pyramid are often realized this way as well, and the general recommendations given here are also applicable at those levels. I do not specifically address tests realized using other, more declarative formalisms such as BDD-style testing. Still, some things probably apply there as well.

Tests are a means of communication

Although reliably verifying that code performs and continues to perform the intended function is probably the primary reason for writing automated tests, well-structured tests can serve more purposes for developers, most of them boiling down to communication. The controversial Uncle Bob Martin coined a famous quote in this regard:

Indeed, the ratio of time spent reading versus writing is well over 10:1. We are constantly reading old code as part of the effort to write new code. … so making it easy to read makes it easier to write.
— [Martin2009] p. 14

...
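As a hedged illustration of the guidelines "verify one aspect per test case" and "use test case names to express readable requirements", consider this minimal Kotlin/JUnit 5 sketch; the `ShoppingCart` class is invented for the example and does not come from the post:

```kotlin
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// A deliberately trivial subject under test, invented for this illustration.
class ShoppingCart {
    private val pricesInCents = mutableListOf<Int>()
    fun add(priceInCents: Int) { pricesInCents.add(priceInCents) }
    val totalInCents: Int get() = pricesInCents.sum()
}

class ShoppingCartTest {

    // Each test verifies exactly one behavior, and its name reads as a requirement.
    @Test
    fun `an empty cart has a total of zero`() {
        assertEquals(0, ShoppingCart().totalInCents)
    }

    @Test
    fun `the total is the sum of all added item prices`() {
        val cart = ShoppingCart()
        cart.add(150)
        cart.add(250)
        assertEquals(400, cart.totalInCents)
    }
}
```

If the second test fails, its name alone tells the reader which requirement is violated, without digging through a convoluted test function that asserts many unrelated aspects.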

August 12, 2021 · updated September 12, 2021 · 26 min

Debugging sporadic connectivity issues of Docker containers

At work we have started to set up new continuous integration servers. We decided to build the whole setup from individual Jenkins instances managed via Docker. Moreover, most build slaves on the instances are dynamically created Docker containers themselves. To spawn these slaves, the Jenkins masters need write access to the Docker socket. Of course, this would be a security risk if they had access to the socket of the main Docker daemon on the host that operates all services, including the Jenkins instances themselves. Thus, we added a second daemon to the host just for the purpose of executing the volatile build slaves. However, we soon noticed that containers executed on this additional daemon frequently showed DNS resolution errors. The remainder of this post explains how we tried to track down this problem, with all the ugly details involved. ...

September 18, 2018 · updated April 30, 2021 · 9 min

autosuspend 2.0: Additions for Waking Up a System

For a few years now I have been maintaining autosuspend, a small daemon that automatically suspends a Linux system in case no activity is detected. With version 2.0, I have added support for scheduled wake-ups based on configurable checks for pending activities. ...

July 29, 2018 · updated April 30, 2021 · 2 min

LaTeX Best Practices: Lessons Learned from Writing a PhD Thesis

A few weeks ago I submitted my PhD thesis, which I wrote in LaTeX. LaTeX is probably the best and most established open-source typesetting solution for academic purposes, but it is also a relic from ancient times. The syntax is more or less a nightmare, you need to remember many small things to create typographically correct documents, the compilation process is a mess, and an enormous number of packages for all kinds of things exists, so you need to know which package is currently the best alternative for your needs. For the quite sizable document that I wrote, I took a reasonable amount of time to find out how to do all these things properly, and this blog post summarizes the (subjective) set of best practices that I applied to my document. This is not a complete introduction to LaTeX, but rather a somewhat structured list of things to do (or not do) and packages to use (or avoid). So you need to know at least the basics of LaTeX. ...

June 12, 2018 · updated May 2, 2021 · 29 min

Coding Python in Neovim with IPython as a REPL

Most of the time at work I am currently doing machine learning / data science using the Python ecosystem. My editor of choice for working in Python has become Neovim, which works really well for autocompletion and linting based on Neomake, UltiSnips, deoplete and deoplete-jedi. However, one thing I had been missing was a tight integration with the IPython / Jupyter Console REPL for quickly experimenting with new code fragments, in the fashion of SLIME for Emacs: simply select a few lines of code and send them to IPython using a command / binding. I have finally found something that works well, which I will explain here. ...

March 15, 2017 · updated April 30, 2021 · 3 min

Android: manually restoring apps from a TWRP backup

After an issue with the keyboard freezing on the Android disk encryption screen was fixed in the Omnirom code base, I was able to upgrade my system again. Unfortunately, the system didn’t boot anymore without a factory reset. Therefore, I finally found out how to manually restore individual apps from a system backup performed with TWRP. ...

July 30, 2016 · updated April 30, 2021 · 2 min

Linux-based Home Entertainment System

For the past two or three years I have been experimenting with different solutions for my personal home entertainment system. Now that the setup has become quite stable, I am going to describe my current solution and its evolution in this blog post, hoping that someone might find it useful. ...

May 20, 2016 · updated April 30, 2021 · 4 min