JavaSPEKTRUM 01/2020: Interviewing the founders of Cape Of Good Code

We were interviewed by Prof. Dr. Michael Stal, chief editor of the JavaSPEKTRUM magazine. The interview was published on 31 January 2020 in the 01/2020 edition. You can download the PDF of the original article in German.

The German company Cape Of Good Code was founded not much more than a year ago. It deals with the analysis of software architectures and proves that innovative ideas can also be implemented commercially in this country. JavaSPEKTRUM talked to the two founders, Egon Wuchner and Konstantin Sokolov.

JavaSPEKTRUM: First of all, I would like to establish the context. Why and when did you start your business?

We founded the company in August 2018. Prior to that, we spent several years working on software quality analysis at Siemens Corporate Technology. We were dissatisfied with the status quo of such analyses, especially with the significance of the results and the lack of actionable insights. So we designed our own analyses and applied them at Siemens. After some time we saw further potential in this type of analysis, but we could not acquire additional budget for it within the company. So we decided to do everything anew: different, better, more scalable and at our own risk.

JavaSPEKTRUM: Who works in your company and what is your short-, medium- and long-term vision? Where do you want to go?

There are three of us, and we will be hiring more employees in 2020. Our vision for 2019 was to stand on our own feet with the first versions of our tool by means of consulting. We have achieved this.

One of our slogans was: “everything you wanted to know about your code, but were afraid to ask”. We did not know how true it would turn out to be: developers and managers do not dare to ask, or they linger in the comfortable belief that they already know the essentials about their code and its quality. The manager believes that the CTO knows, the CTO believes that the architect knows, and so on. So we had not anticipated how difficult it would be to acquire our first customers. With our analyses we indirectly step on the toes of all project participants, and that does not make people happy.

With our solutions we want to contribute to all stakeholders working better together and less against each other

This is quite different with our IT/SW due diligence projects. Buyers want a valuation of the software assets and are very happy about it.

In the medium term up to 2021, we want to offer our tool chain gradually as SaaS/On-premise, so that all business and software stakeholders can make better decisions regarding the use, effort and quality of features in the code. With our solutions we want to contribute to all stakeholders working better together and less against each other.

Our distant goal is a symbiosis of AI and human intelligence: development remains a cognitive achievement, but our analyses and forecasts should provide decisive clues, even to the extent of automated suggestions for improvement, i.e. the synthesis of newly proposed code.

JavaSPEKTRUM: Software architectures are the most important component in software development projects. Accordingly, there are already some tools for architecture analysis. What do you do differently than the others?

Current analysis tools examine certain best practices and code smells at the lowest level, as well as metrics that have been proven to be of little value, such as cyclomatic complexity. The architecture aspect is limited to dependency analysis. In both cases, too many undifferentiated and often productivity-irrelevant findings are produced. What is the point of, say, improving a module at utility level, even though it has not been changed for two years and otherwise works well?

Gut feeling is not a good advisor

The one about the utility level was a trivial example. But it is a fact that gut feeling is not a good guide when it comes to deciding which modules are important for productivity and which are not.

It’s also hard to determine that just by evaluating the current code. The best way to figure it out is to observe how an architecture “behaves” over time. After all, an architecture is always just an attempt at a solution; how good it is can hardly be determined a priori. The crucial question is how easy it is to add new features to the software: with a reasonable amount of cognitive effort and without new errors. It is precisely this aspect that we miss in current architecture analyses. But answering it requires more data sources than just the code.

With our analysis tool DETANGLE we analyze the history of the code in the code repository and establish the connection to the issue tracker, where functionality and bugs are captured as tickets. We do not measure code modularity but feature modularity, i.e. how strongly implemented features are coupled to each other in the code and what the cohesion of a feature looks like across the code. We speak of feature quality debt.
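The co-change idea behind such a feature-coupling measurement can be sketched in a few lines of Python. The ticket IDs, file names and the simple shared-file count below are invented for illustration; they are not DETANGLE’s actual metric, just the general principle of joining commit history with tickets:

```python
from collections import defaultdict
from itertools import combinations

# Invented commit log: (feature ticket, files touched in the commit).
# In practice this is mined from the VCS history joined with the issue tracker.
commits = [
    ("FEAT-1", {"billing.py", "util.py"}),
    ("FEAT-2", {"search.py", "util.py"}),
    ("FEAT-2", {"search.py"}),
    ("FEAT-3", {"export.py"}),
]

# Which features touched which file?
features_per_file = defaultdict(set)
for feature, files in commits:
    for f in files:
        features_per_file[f].add(feature)

# Naive feature coupling: two features are coupled whenever they change
# the same file; count the shared files per feature pair.
coupling = defaultdict(int)
for f, feats in features_per_file.items():
    for a, b in combinations(sorted(feats), 2):
        coupling[(a, b)] += 1

print(dict(coupling))  # {('FEAT-1', 'FEAT-2'): 1} -> coupled via util.py
```

FEAT-3 stays uncoupled because it lives in its own file; FEAT-1 and FEAT-2 are entangled through the shared utility module.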

JavaSPEKTRUM: Your product is called DETANGLE. Can I imagine that your tool makes “hidden” architectural information visible?

Yes, we help to untangle the code so that the business-relevant features are subjected to a DETANGLE, a decoupling.

We make the modularity of the code in terms of features visible qualitatively by means of visualizations and quantitatively by means of metrics. Which parts of the code have to be touched again and again to add new features? This information across all features and the code base is not visible to humans and is not made visible by other tools. Measuring this information requires a new type of metrics.

The fact that decoupling is worthwhile and has a positive effect on feature throughput can be understood as follows: various studies show that developers spend 50-80% of their time reading and understanding code.

This brings to mind an excerpt from Robert Martin’s book (Fig. 1), where he describes the typical coding process. The environment and implications of new code must be understood. Time and unwanted side effects can be saved significantly if the cognitive effort to understand the code can be reduced. This is the case when a module has clearly defined responsibilities and contributes only to a few related features.

Developers spend 50%-80% of their time understanding code

Fig. 1: Excerpt from “Clean Code” by Robert Martin

By the way, in order to take the reading part into account when writing code, we have also introduced a separate effort measurement based on the changes in the code: for the whole system, for modules or per feature/bug. Even deleted code means effort, because you first have to understand what you are deleting.
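A minimal sketch of such a churn-based effort measure, with hypothetical ticket IDs and line counts; counting deleted lines alongside added ones reflects the point that deletion also requires understanding:

```python
# Invented per-commit churn records: (ticket, lines added, lines deleted).
changes = [
    ("FEAT-1", 120, 10),
    ("FEAT-1", 30, 55),
    ("BUG-7", 5, 40),
]

effort = {}
for ticket, added, deleted in changes:
    # Deleted lines count as effort too: you first have to understand
    # the code you are removing.
    effort[ticket] = effort.get(ticket, 0) + added + deleted

print(effort)  # {'FEAT-1': 215, 'BUG-7': 45}
```

Aggregating the same records by module instead of by ticket yields the per-module view mentioned above.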

JavaSPEKTRUM: How would you describe DETANGLE in one or two sentences?

DETANGLE provides analysis and AI-based prediction of success-critical data based on the business relevance of features (their development effort, defect density and quality in the code), instead of pure code analysis or subjective estimates by developers, managers and customers.

JavaSPEKTRUM: What exactly does it do? What information and views does it provide the user? In which process steps of the development is it used and how?

We have views and boards. Views provide an overview, while boards display evaluations as decision-making aids.

For example, views provide information about the size of the system, the amount of development work done, the distribution of these values over modules, features and bugs, and their progression over time. Thus one can see, for example, whether the effort required for features is greater than that required for bug-fixing. For refactorings, it can be seen when these efforts were made and whether they led to, say, lower bug-fixing efforts. The decisive question is: is the effort for feature development on average greater than for improvements such as refactorings and bug-fixing?

The boards measure feature quality debt and the impact of bugs. They show for which modules high quality debt is incurred and whether it has reached critical threshold values. It can be seen whether quality debt not only limits extensibility with new features but already leads to more bugs.

Another board compares an effort estimate for the improvement of critical modules with the predicted maintenance effort for expected bug-fixing. 

DETANGLE can be used at several points in the process: at each release planning, when improvement measures are planned, or at the end of an iteration for monitoring purposes by architects. As part of CI, DETANGLE can even be triggered with each commit.

JavaSPEKTRUM: What types of users does DETANGLE address – architects, developers or even decision makers?

DETANGLE addresses architects and decision makers. Developers are more interested in results of code analysis to detect bad smells and programming errors.

Decision makers and customers understand little about code. But they have to decide what the budget will be used for. Talking about features in the customer’s domain helps a lot. A manager/customer can easily follow when talking about features. And he can follow the values of feature modularity without understanding the concept in detail. He can use figures on the effort and quality of features, data-based effort estimates and maintenance forecasts to make further decisions.

The aim is to use the available budget as effectively as possible

JavaSPEKTRUM: On your website you describe as advantages improved quality, insights about observation or monitoring and better planning. Could you please explain how DETANGLE achieves these goals?

As mentioned, we monitor the effort per feature. We can identify when the effort for bug-fixing exceeds the effort for features. Furthermore, the Pareto rule applies: 80% of the effort is spent on 20% of the features. So it is worthwhile to take a closer look at this 20%. Is further effort worthwhile at all on these features? 
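The Pareto-style selection described above can be sketched as follows; the effort numbers per feature are invented, and the 80% threshold is the rule of thumb from the text:

```python
# Invented effort per feature (e.g. churn-based, as discussed earlier).
effort = {"F1": 400, "F2": 250, "F3": 150, "F4": 100, "F5": 50, "F6": 50}

total = sum(effort.values())
ranked = sorted(effort.items(), key=lambda kv: kv[1], reverse=True)

# Smallest set of features accounting for at least 80% of total effort:
# these are the candidates worth a closer look.
hotspots, cumulative = [], 0
for name, e in ranked:
    if cumulative >= 0.8 * total:
        break
    hotspots.append(name)
    cumulative += e

print(hotspots)  # ['F1', 'F2', 'F3']
```

Here half of the features absorb 80% of the effort, so the quality budget is best discussed in terms of those three.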

Budget for quality improvements will always be quite limited. With DETANGLE it is possible to spend this budget on improving the code that contributes to business-relevant features. The point is to use the available budget as effectively as possible.

We support the planning of these improvements vs. other features by estimating the effort for the improvement and making a maintenance forecast that can be used for better decision making.

High feature quality debt makes it possible to predict subsequent errors and the upcoming maintenance effort in the form of bug fixes

JavaSPEKTRUM: You have also created a number of white papers, for example, on the conflict between features and quality, and you have given users the tools to make decisions more easily, such as which features should and should not be included in a product. Can you explain how DETANGLE helps with these decisions?

We measure with DETANGLE among other things feature quality debt. Based on this, customers, managers and developers can decide together if and where quality needs to be improved along the business value of the features.

In the case of in-demand features with high debt, the feature modularity has to be improved, because extensions of these features can be expected. Conversely, improvement is only worthwhile in the code areas where new features will actually be added and the feature quality debt is high. If the values in a code area are not yet critical, you can keep adding features there.

JavaSPEKTRUM: DETANGLE promises support for ensuring functional safety. How exactly do you achieve this? Through traceability?

This is an aspect that we have not yet been able to pursue in the necessary depth, but where we see an application field for DETANGLE. After all, it does more than just traceability of functionality. Feature coupling could also be used to measure the absence of side-effects of safety critical requirements. This would be the initial thesis that could be investigated in a research project together with TÜV and a company in the industry.

JavaSPEKTRUM: DETANGLE is also intended to help manage risks. What do you mean by this and how does your tool help me?

Besides features and quality, there are other key players in software projects, namely people – and their knowledge. It is about the risks of knowledge distribution in teams. Where are there islands of knowledge, where are there knowledge “balls of mud”? Who are the coordinators in the team (key people)? What about the knowledge ramp-up for new employees? Ideally, new employees should start with bug-fixing, move on to features, then to refactorings and eventually become coordinators. 

The question regarding knowledge distribution is how many developers should ideally work together directly. If there are too few (e.g. only one), you lose the know-how when they leave the project and have to accept a high training effort. If there are too many, there may be additional effort due to communication overhead.

In this context there is the “diffusion of responsibility” phenomenon. If too many people arrive at an accident site at the same time, the probability that targeted help will be provided decreases because everyone assumes that the other person will do it. So if too many developers work on a module, in the end nobody feels responsible for the quality anymore.

It turns out that similar modularity principles apply to developers as to features. “Committer coupling” is a metric we capture in this context. In addition, we visualize the distribution of knowledge, from which one can already recognize a lot qualitatively.
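The raw signal behind such a people-oriented metric can be sketched with an invented commit history; this only counts how many developers touch each module, which can then be compared against the ideal head count per module:

```python
from collections import defaultdict

# Invented commit history: (developer, module touched).
commits = [
    ("alice", "core"), ("bob", "core"),
    ("alice", "ui"), ("carol", "ui"), ("dave", "ui"),
    ("bob", "db"),
]

devs_per_module = defaultdict(set)
for dev, module in commits:
    devs_per_module[module].add(dev)

# Head count per module: the raw input both for committer coupling and
# for spotting knowledge islands (one dev) and crowded modules (many devs).
for module, devs in sorted(devs_per_module.items()):
    print(f"{module}: {len(devs)} developer(s) {sorted(devs)}")
```

In this toy history, "db" is a knowledge island (one developer), "ui" risks diffusion of responsibility (three developers), and "core" sits at the pair-sized sweet spot.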

How many developers should ideally work on a module? An exact number is difficult to give. But our research and other empirical findings show that it is in the range of 2.

A photo comes to mind that fits in well here. It was taken from a talk by Logitech CEO Bracken P. Darrell at the TNW Conference 2019 (Fig. 2). It refers to the successful real and fictional pairs Jobs/Wozniak and Holmes/Watson.

Fig. 2: Logitech CEO Bracken P. Darrell at the TNW Conference 2019: “Great Things Happen in Teams of 2”

Another use case aims at estimating the risks of losing employees or identifying critical employees in the case of company acquisitions or team restructuring.

JavaSPEKTRUM: Are there typical errors or problems that you encounter with software systems with conspicuous frequency?

The first step would be to focus on process quality. In the meantime, more and more projects record requirements and bugs as tickets and associate commits with a ticket ID. But very few of them record technical work, such as refactorings, as tickets. This makes it impossible to see the effect of this work, e.g. whether there are fewer bugs afterwards.
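The commit-to-ticket link mentioned above usually relies on a simple message convention. A sketch, assuming a made-up "PROJ-123"-style ticket key; untracked commits are exactly the technical work whose effect stays invisible:

```python
import re

# Assumed convention: commit messages reference a ticket key like "PROJ-123".
TICKET = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

messages = [
    "PROJ-101: add CSV export",
    "fix crash on empty input (PROJ-102)",
    "cleanup imports",  # no ticket: this refactoring work stays invisible
]

linked = []
for msg in messages:
    m = TICKET.search(msg)
    linked.append(m.group(0) if m else "UNTRACKED")

print(linked)  # ['PROJ-101', 'PROJ-102', 'UNTRACKED']
```

Recording refactorings as tickets of their own turns the "UNTRACKED" bucket into measurable work whose effect on later bug rates can be checked.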

This technical work does not only have positive effects, though. Django, the web framework, for example, has the ticket type “Cleanup/Optimization”, under which technical work including refactoring is captured. Since 2017, this work has lowered the feature quality debt, i.e. new features are less coupled to each other. Nevertheless, bugs in certain areas of the code have hardly decreased. This was due to the high degree of coupling between the technical improvements and between the committers. That is surprising: many people working on improvements at the same time, in a slightly chaotic way.

Furthermore, there is sometimes little or no source code documentation, with excuses that seem far-fetched to us. DETANGLE already performs an analysis of this with a view of its own.

And often there are alarmingly few automated tests; instead, people waste a lot of time on manual testing before each release.

JavaSPEKTRUM: Can your tool be integrated with other tools? I was thinking about testing tools, modeling tools, development tools, version management, and the whole range of other tools.

Integration with code repositories and issue trackers is given by definition because they are the main sources of information for DETANGLE.

Integration with test tools is useful in several ways. Results of test runs, coverage and mutation testing tools can easily be captured by our analysis engine and fed into our data model. But we don’t want to reveal too much about what we might want to do next.

JavaSPEKTRUM: How does the integration into a DevOps environment work?

DETANGLE must be configured once and can be easily integrated into the CI/CD process. With every nightly build you can trigger the DETANGLE analysis again and update the visualizations or generate notifications.

JavaSPEKTRUM: Do you also support system architectures from multiple fields, such as mechatronics, electronics or IT infrastructures?

No. IT infrastructure would certainly be useful, too…

JavaSPEKTRUM: If a company decides to use DETANGLE, how exactly does it proceed until the tool is ready for operational use?

It takes about two weeks as a consulting service, including set-up, analysis and presentation of the results. The customer receives the analysis results, views and boards via a Docker image.

JavaSPEKTRUM: Is there a way to test it?

Currently in the form of a proof of concept accompanied by us.

JavaSPEKTRUM: If you look five years into the future, what are you aiming for?

DETANGLE should become an “official” seal of quality, for example in assessing the quality of the development process or compliance with certain guidelines, such as the KAIT guidelines (supervisory requirements for IT in capital management) recently published by the BaFin authority.

There are even more aspects that can be assessed using machine learning: e.g. the quality of commit messages and ticket content – both important criteria for the quality of the development process.

Our point is that in a sea of code one should have the ability to distinguish good from bad

JavaSPEKTRUM: I’m curious. You call your company Cape Of Good Code. How did you come up with that name, anyway?

To be honest, the name was suggested by a friend of Konstantin (Oleg Pomjanski), who helped us design the logo, among other things. We liked the name right away and could identify with it immediately.

So far we have only received positive reactions. Even the notary (when the GmbH was founded) thought the name was “cool”.

For us, identification with the company name comes down to this: in a sea of code, one should have the ability to distinguish good from bad pieces. We offer a point of orientation; like a lighthouse, we point our customers in the right direction.

JavaSPEKTRUM: Finally, a completely different question: You were previously employed in companies. Based on your past and present experience, what advice would you give to current and future job starters?

Egon: In Germany there is a general complaint that we do not think in a sufficiently business-oriented way. Nevertheless, we would like to take up the cudgels for “technology”: having dealt deeply with technical matters gives you the self-confidence to be able to learn whatever comes along.

If you come across a promising idea, the principle should be: look for companions. And do not take yourself too seriously, but take others all the more seriously. Only with a diversity of perspectives can a better and more sustainable solution be found.

And always remember: try to make someone happy with your solution.

Konstantin: Don’t be afraid of losing your job. Most of you will live to be 100 or older; on that scale, the impact of a lost job on your life is negligible. Also, most of the time you only regret what you didn’t do. So if you are in doubt, do it!
