Cape of Good Code Team

Allgäu Digital: Quality Analysis in Software Development


Based on an interview with us in December 2019, Ronja Hartmann wrote an article for Allgäu Digital about Cape of Good Code. This is our translation of the original article in German.


Egon Wuchner and Konstantin Sokolov are convinced that software engineers and management need a common language to bring digital development and business development together.

Good code is not an end in itself; it reduces effort, errors and risks in all digital projects.

With 20 years of experience at Siemens Corporate Technology, an internal research and consulting department of Siemens AG, Dipl.-Math. Egon Wuchner knows what it takes for companies to make money with their software-based solutions. As with any investment, the choice of the most efficient and least risky means is crucial. Since 2011, he has been investigating what actually constitutes good software development.

Since 2013, he has been working together with Dipl.-Ing. (Techn. Informatik) Konstantin Sokolov to define measurable criteria and develop appropriate analysis methods. And to make the results visible: where does high software or code quality contribute to the functionality of solutions, and where does poor code quality create hurdles, eat up resources and cause errors?

Egon Wuchner is the founder of Cape of Good Code. © Allgäu GmbH, Tobias Hertle

Developers understand the problem when code “grows” through changes and its quality deteriorates, but they find it difficult to communicate it

as Wuchner describes the difficulty. “It’s more like a kind of gut feeling that you should work on the software architecture here and there, because editing is becoming increasingly difficult.” Even with sound figures, the problem remains too abstract and theoretical for people without a technical background.

However, as digitalization progresses, companies and industries that traditionally have no experience with IT also need a basis for strategically sound decisions on digital issues. 

Wuchner and Sokolov therefore decide to break the measurement results down to the individual functionalities, i.e. the specific features that the company wants to offer, sell and earn money with. In this way, a relationship can be determined between the benefit of a feature and its cost, which includes the condition of the code, its resilience to further changes, and the follow-up costs known as the technical debt of software development.

When it becomes clear that the topic has expanded into a complex research project and will not be continued as a Siemens spin-off, the two boldly start their own company. “We simply did it ourselves,” says Wuchner.

“It didn’t take a big decision,” adds Sokolov. He is already at Siemens before he meets Wuchner; he is doing an internship, writing his thesis and has been working for the company as a freelancer since 2011. When he is assigned to the project by Wuchner, software quality and automated analyses are new to him. “While working on the topic, I quickly saw the potential and we put a lot of work into it.” 

So from August 2018 they build their solution from scratch, implementing completely new approaches in the process.

We provide the quickly comprehensible data basis for corporate decisions on digital topics.

More and more entrepreneurial developments and strategies include digital aspects and components. Implementing them requires software that offers customers certain functions. New solutions have to be developed or existing ones extended. Cooperation between companies requires interfaces, data synchronisation and the like. All this is based on code: text that defines exactly which steps the system should process, calculate or analyse.

Depending on how this code is structured, it may be more difficult or easier for companies to make adjustments and add new functions, and thus more or less resource-intensive. 

Good code is structured to carry new features without causing unexpected errors, downtime, maintenance and costs. Clean documentation reduces the time a developer needs to understand it. And knowledge about the code and its development is distributed without risk.

Laptop with the website of Cape of Good Code. © Allgäu GmbH, Tobias Hertle

Good code withstands changes well – Feature Quality Analysis

The analysis done by Cape of Good Code determines where the code structure may be so overloaded that adding new features increases the susceptibility to errors. After all, the code, i.e. the software architecture, is never finished, but must be designed for changes and additions. “The market is highly variable and customer requirements are unclear,” says Wuchner, describing the situation. “We don’t know which features will succeed and possibly have a differentiating effect in the market and thus develop into a competitive advantage. At the same time, companies have to be innovative, offer different features, experiment and, for example, use A/B tests to determine usage figures.”

But even with high usage it makes sense to determine whether the effort is still justifiable. Here, too, the Pareto rule holds true – 20% of features account for 80% of the effort.

Good code is well documented – process quality analysis

For software developers, the topic is nothing new. Anyone who deals with code on a daily basis and spends 50-80% of their time reading and understanding code knows the problem: the worse the documentation, the more effort is required.

“Nevertheless, not enough is done,” says Sokolov, and explains: “It’s a vicious circle: you neglect the documentation, so it doesn’t add any value. You see no added value, so you don’t do documentation.” “But,” he adds, “you can learn that and it’s worth it.”

Best practices in development are close to Sokolov’s heart. In his opinion, just one day of process quality training on common practices and procedures helps almost every company, and forms the basis for meaningful evaluations of code quality itself. This includes: good ticket descriptions (the issues to be processed), good commit messages (what was done and why) and good documentation (the changes in detail).
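As a concrete illustration, here is a minimal, hypothetical sketch (our own example in Python, not part of any Cape of Good Code tooling) of how such commit message conventions could be checked automatically; the ticket ID format and the thresholds are assumptions:

```python
import re

# Hypothetical convention: every commit message starts with a ticket ID
# such as "PROJ-123:" and has a body explaining what was done and why.
TICKET_PREFIX = re.compile(r"^[A-Z]+-\d+:")

def check_commit_message(message: str) -> list:
    """Return a list of process-quality issues found in a commit message."""
    issues = []
    lines = message.strip().splitlines()
    subject = lines[0] if lines else ""
    if not TICKET_PREFIX.match(subject):
        issues.append("subject does not reference a ticket ID")
    if len(subject) > 72:
        issues.append("subject is longer than 72 characters")
    if len(lines) < 3:  # expect subject, blank line, body
        issues.append("no body explaining what was done and why")
    return issues

print(check_commit_message("PROJ-42: Extract billing rules\n\nMoves rules..."))
# -> []
print(check_commit_message("fixed stuff"))
# -> ['subject does not reference a ticket ID',
#     'no body explaining what was done and why']
```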

Knowledge of the code is well distributed – Knowledge analysis

Knowledge that accumulates in so-called knowledge islands among individuals, and knowledge clutter that nobody cares about, are obvious risks and increase the effort required for code changes. The same is true if only one coordinator takes care of certain areas. Based on the history of the code, Cape of Good Code also analyses how evenly knowledge is distributed – anonymously, of course, in accordance with applicable data protection regulations.

Unlike other analysis tools on the market, we prioritize the code problems that are also problems at the business level.

On the basis of cost-benefit considerations, priorities are set across all three areas as to where action should be taken first. Optimization can always be done; there is no finished state in software development. But it is also true that the later you tackle it, the higher the maintenance costs. This is called technical debt. The “interest” also arises because feature development then becomes more complex and more expensive.

These evaluations benefit industrial companies that already have their own software solutions in their portfolio to supplement traditional products and services, specialized software providers, SMEs that want to push digitization and check the quality of solutions developed for them, and also companies that want to acquire other companies.

In addition to the quantitative evaluation of code quality and the consequences, the analysis also shows the focal points in a quickly graspable traffic light system – where it’s critical, where it’s still okay – and thus highlights the implications for the company. By offering both in parallel, Cape of Good Code creates a common level at which developers and management can discuss and decide on the issue. 

The Bavarian State Ministry of Economic Affairs and Media, Energy and Technology also sees this as a technological challenge, highly innovative and of great benefit, and has been supporting the development since October 2018 as part of the Bavarian Program for Technology-Oriented Start-ups (BayTOU).

Egon Wuchner is the founder and CEO of Cape of Good Code. © Allgäu GmbH, Tobias Hertle

The support from the Bavarian government helped us a lot to overcome the initial hurdles in research and development 

as Wuchner says.

Since the end of 2018 they have been in the business incubator. The start into entrepreneurship was a matter of course for both of them. Wuchner is enthusiastic about the fact that you can realize your own ideas and even earn money with them, and adds that this was already his wish when he started at Siemens. “The thing just had to mature. I needed the right people and the right subject.” His time at Siemens gave him financial reserves, and he is happy that Sokolov took the risk with two small children. Sokolov replies: “I had nothing to lose. Unlike you, who gave up a career.”

The two complement each other perfectly in their fields of activity: Sokolov is mainly concerned with development and has built large parts of the analysis engine. Wuchner sees his long-term focus in consulting, but currently also deals with experimental research and customer acquisition.

In the meantime, they have another member of staff on board, who takes care of the visualisations and thus turns the data into a comprehensible basis for management decisions.

We have plenty of ideas!

In the future, they would like to have more employees, mainly for development. By the end of the year, they would like to offer parts of the analysis tool as a cloud-based product as well, including customization options. A further research project would also be conceivable, to find out the best approach for addressing the problems at the identified focal points in the code. “We have enough ideas,” laughs Wuchner.

Only Wuchner lives in the Allgäu, where the GmbH is based. Sokolov is based in Düsseldorf, the third team member in Leipzig. Allgäu Digital and the business incubator have given them a lot of support in all entrepreneurial questions, especially in the beginning. In addition to the coaching offers for the founders, the exchange with others is especially important.

“Our customers are not necessarily in the Allgäu, but that can still change, our offer is definitely relevant for the companies here”, smiles Wuchner. 

Posted by Cape of Good Code Team in Press About Us

JavaSPEKTRUM 01/2020: Interviewing the founders of Cape Of Good Code

We were interviewed by Prof. Dr. Michael Stal, chief editor of the JavaSPEKTRUM magazine. The interview was published on 31 January 2020 in the 01/2020 edition. You can download the PDF of the original article in German.


The German company Cape Of Good Code was founded not more than a year ago. It deals with the analysis of software architectures and proves that it is also possible to commercially implement innovative ideas in this country. JavaSPEKTRUM talked to the two founders Egon Wuchner and Konstantin Sokolov.

JavaSPEKTRUM: First of all, I would like to establish the context. Why and when did you start your business?

We founded the company in August 2018. Prior to that, we spent several years working on software quality analysis at Siemens Corporate Technology. We were dissatisfied with the status quo of the analyses, especially with the significance of the results and the lack of actionable knowledge. So we designed our own analyses and applied them at Siemens. After some time, we saw the further potential of this type of analysis, but we couldn’t acquire additional budget for it in the company. So we decided to make everything new, different, better, more scalable, and at our own risk.

JavaSPEKTRUM: Who works in your company and what is your short-, medium- and long-term vision? Where do you want to go?

There are three of us, and we will be hiring more employees in 2020. Our vision for 2019 was to stand on our own feet with the first stages of our tool by means of consulting. We have achieved this.

One of our slogans was: “everything you wanted to know about your code, but were afraid to ask”. We didn’t know how true it would be that developers and managers didn’t dare to ask, or lingered in the comfortable belief that they knew the essentials about their code and its quality. The manager believes that the CTO knows, the CTO believes that the architect knows, and so on. So we had not anticipated how difficult it would be to acquire our first customers. With our analyses, we are indirectly stepping on the toes of all project participants. We do not make people happy with that.

With our solutions we want to contribute to all stakeholders working better together and less against each other

Our IT/SW due diligence projects are exactly the opposite: buyers want a valuation of the software assets and are very happy about it.

In the medium term up to 2021, we want to offer our tool chain gradually as SaaS/On-premise, so that all business and software stakeholders can make better decisions regarding the use, effort and quality of features in the code. With our solutions we want to contribute to all stakeholders working better together and less against each other.

Our distant goal is a symbiosis of AI and human intelligence: development remains a cognitive achievement, but our analyses and forecasts should provide decisive clues, even to the point of automated suggestions for improvement, i.e. the synthesis of new, proposed code.

JavaSPEKTRUM: Software architectures are the most important component in software development projects. Accordingly, there are already some tools for architecture analysis. What do you do differently than the others?

Current analysis tools examine certain best practices and code smells at the lowest level, as well as metrics that have been proven to be of little value, such as cyclomatic complexity. The architecture aspect is limited to dependency analysis. In both cases, too many undifferentiated and often productivity-irrelevant findings are produced. What is the point of, say, improving a module at utility level, even though it has not been changed for two years and otherwise works well?

Gut feeling is not a good advisor

The one about the utility level was a trivial example. But it is a fact that gut feeling is not a good guide when it comes to deciding which modules are important for productivity and which are not.

It’s also hard to determine that just by evaluating the current code. The best way to figure it out is to observe how an architecture “behaves” over time. After all, an architecture is always just an attempt at a solution – it is difficult to determine a priori how good it is. The crucial question is how easily new features can be added to the software, with a reasonable amount of cognitive effort and without new errors. It is precisely this aspect that we find missing from current architecture analyses. But this requires more data sources than just the code.

With our analysis tool DETANGLE we analyze the history of the code in the code repository. We establish the connection to the issue tracker, where functionality and bugs are captured as tickets. We do not measure code modularity, but feature modularity, i.e. how the implemented features are coupled to each other in the code, and what the cohesion of a feature looks like across the code. We speak of feature quality debt.
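To give a rough idea of the underlying principle (an illustrative sketch only, not DETANGLE’s actual metrics, which are more elaborate): once commits are joined with their tickets, simply counting how many distinct features touch each file already hints at feature coupling.

```python
from collections import defaultdict

# Assume commits have already been joined with the issue tracker, so each
# change record carries the touched file and the feature ticket behind it.
# The file paths and ticket IDs below are made up for illustration.
changes = [
    ("billing/invoice.py", "FEAT-1"),
    ("billing/invoice.py", "FEAT-2"),
    ("billing/invoice.py", "FEAT-3"),
    ("reporting/export.py", "FEAT-2"),
    ("auth/login.py", "FEAT-4"),
]

features_per_file = defaultdict(set)
for path, feature in changes:
    features_per_file[path].add(feature)

# A file that many unrelated features keep touching is a candidate for
# high feature coupling, i.e. low feature modularity.
for path, feats in sorted(features_per_file.items(), key=lambda kv: -len(kv[1])):
    print(f"{path}: touched by {len(feats)} feature(s) {sorted(feats)}")
# billing/invoice.py: touched by 3 feature(s) ['FEAT-1', 'FEAT-2', 'FEAT-3']
```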

JavaSPEKTRUM: Your product is called DETANGLE. Can I imagine that your tool makes “hidden” architectural information visible?

Yes, we help to untangle the code, so that the business-relevant features are subjected to a DETANGLE, a decoupling.

We make the modularity of the code in terms of features visible qualitatively by means of visualizations and quantitatively by means of metrics. Which parts of the code have to be touched again and again to add new features? This information across all features and the code base is not visible to humans and is not made visible by other tools. Measuring this information requires a new type of metrics.

The fact that decoupling is worthwhile and has a positive effect on feature throughput can be understood as follows: Various studies show that developers spend 50-80% of their time reading/understanding code. 

This brings to mind an excerpt from Robert Martin’s book (Fig. 1), where he describes the typical coding process. The environment and implications of new code must be understood. Significant time can be saved, and unwanted side effects avoided, if the cognitive effort needed to understand the code is reduced. This is the case when a module has clearly defined responsibilities and contributes only to a few related features.

Developers spend 50-80% of their time understanding code

Fig. 1: Excerpt from “Clean Code” by Robert Martin

By the way, in order to take the reading part into account when writing code, we have also introduced a separate effort measurement based on the changes in the code: for the whole system, for modules or per feature/bug. Even deleted code means effort, because you first have to understand what you are deleting.
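A minimal sketch of such a churn-based effort measure, under the simplifying assumption that effort per ticket is approximated by added plus deleted lines (real effort models are certainly more refined):

```python
# Added *and* deleted lines both count towards effort, because code must
# be understood before it can be deleted. All numbers are made up.
commits = [
    {"ticket": "FEAT-1", "added": 120, "deleted": 30},
    {"ticket": "FEAT-1", "added": 15, "deleted": 80},
    {"ticket": "BUG-7", "added": 5, "deleted": 2},
]

effort = {}
for c in commits:
    effort[c["ticket"]] = effort.get(c["ticket"], 0) + c["added"] + c["deleted"]

print(effort)  # {'FEAT-1': 245, 'BUG-7': 7}
```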

JavaSPEKTRUM: How would you describe DETANGLE in one or two sentences?

DETANGLE provides analysis and AI-based prediction of success-critical data on the basis of the business relevance of features (their development effort, error density and quality in the code), instead of pure code analysis or the subjective estimates of developers, managers and customers.

JavaSPEKTRUM: What exactly does it do? What information and views does it provide the user? In which process steps of the development is it used and how?

We have views and boards. Views provide an overview, while boards display evaluations as decision-making aids.

For example, views provide information about the size of the system, the amount of development work done, the distribution of these values over modules, features and bugs, and their progression over time. Thus, for example, one can see whether the effort required for features is greater than that required for bug-fixing. For refactorings, too, one can see when these efforts were made and whether they led to, for example, lower bug-fixing effort. The decisive question is: is the effort for feature development on average greater than that for improvements like refactorings and bug-fixing?

The boards measure feature quality debt and the impact of bugs. They show for which modules high quality debt is incurred and whether it has reached critical threshold values. One can see whether quality debt not only limits extensibility with new features, but already leads to more bugs.

Another board compares an effort estimate for the improvement of critical modules with the predicted maintenance effort for expected bug-fixing. 

DETANGLE can be used at several steps in the process: at each release planning, when improvement measures are planned, or at the end of an iteration for monitoring purposes by architects. As part of CI, DETANGLE can even be triggered with each commit.

JavaSPEKTRUM: What types of users does DETANGLE address – architects, developers or even decision makers?

DETANGLE addresses architects and decision makers. Developers are more interested in results of code analysis to detect bad smells and programming errors.

Decision makers and customers understand little about code. But they have to decide what the budget will be used for. Talking about features in the customer’s domain helps a lot. A manager or customer can easily follow when the conversation is about features, and can follow the values of feature modularity without understanding the concept in detail. They can use figures on the effort and quality of features, data-based effort estimates and maintenance forecasts to make further decisions.

The aim is to use the available budget as effectively as possible

JavaSPEKTRUM: On your website you list improved quality, insights through observation and monitoring, and better planning as advantages. Could you please explain how DETANGLE achieves these goals?

As mentioned, we monitor the effort per feature. We can identify when the effort for bug-fixing exceeds the effort for features. Furthermore, the Pareto rule applies: 80% of the effort is spent on 20% of the features. So it is worthwhile to take a closer look at these 20%. Is further effort on these features worthwhile at all?
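The Pareto check itself is simple arithmetic; a small sketch with made-up numbers:

```python
# Sort features by measured effort and compute the share of the top 20%.
effort_per_feature = {
    "FEAT-1": 500, "FEAT-2": 300, "FEAT-3": 80, "FEAT-4": 60, "FEAT-5": 25,
    "FEAT-6": 15, "FEAT-7": 10, "FEAT-8": 5, "FEAT-9": 3, "FEAT-10": 2,
}

ranked = sorted(effort_per_feature.items(), key=lambda kv: -kv[1])
top_n = max(1, len(ranked) // 5)  # the top 20% of features
top_share = sum(e for _, e in ranked[:top_n]) / sum(effort_per_feature.values())

print(f"Top {top_n} of {len(ranked)} features: {top_share:.0%} of the effort")
# -> Top 2 of 10 features: 80% of the effort
```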

Budget for quality improvements will always be quite limited. With DETANGLE it is possible to spend this budget on improving the code that contributes to business-relevant features. The point is to use the available budget as effectively as possible.

We support the planning of these improvements vs. other features by estimating the effort for the improvement and making a maintenance forecast that can be used for better decision making.

By means of high feature quality debt it is possible to predict subsequent errors and the upcoming maintenance effort in the form of bug fixes

JavaSPEKTRUM: You have also created a number of white papers, for example, on the conflict between features and quality, and you have given users the tools to make decisions more easily, such as which features should and should not be included in a product. Can you explain how DETANGLE helps with these decisions?

Among other things, we measure feature quality debt with DETANGLE. Based on this, customers, managers and developers can decide together if and where quality needs to be improved along the business value of the features.

For features in demand that carry high debt, feature modularity has to be improved, because extensions of these features are to be expected. Likewise, if the code where new features will be added has high feature quality debt, it is worth improving only there. If the values in that code area are not yet critical, you can simply add more features.

JavaSPEKTRUM: DETANGLE promises support for ensuring functional safety. How exactly do you achieve this? Through traceability?

This is an aspect that we have not yet been able to pursue in the necessary depth, but where we see an application field for DETANGLE. After all, it does more than just trace functionality. Feature coupling could also be used to measure the absence of side effects of safety-critical requirements. That would be the initial thesis, to be investigated in a research project together with TÜV and a company from the industry.

JavaSPEKTRUM: DETANGLE is also intended to help manage risks. What do you mean by this and how does your tool help me?

Besides features and quality, there are other key players in software projects, namely people – and their knowledge. It is about the risks of knowledge distribution in teams. Where are there islands of knowledge, where are there knowledge “balls of mud”? Who are the coordinators in the team (key people)? What about the knowledge ramp-up for new employees? Ideally, new employees should start with bug-fixing, move on to features, then to refactorings and eventually become coordinators. 

The question regarding knowledge distribution is how many developers should ideally work together directly. If there are too few of them (e.g. only one), you lose the know-how when they leave the project and have to accept a high training effort. If there are too many, there may be additional effort due to communication overhead.

In this context there is the “diffusion of responsibility” phenomenon. If too many people arrive at an accident site at the same time, the probability that targeted help will be provided decreases because everyone assumes that the other person will do it. So if too many developers work on a module, in the end nobody feels responsible for the quality anymore.

It turns out that similar modularity principles apply to developers as to features. The “committer coupling” is a metric we capture in this context. In addition, we visualize the distribution of knowledge, from which one can already recognize a lot qualitatively.
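As a simplified illustration of that qualitative side (our own sketch, not the committer coupling metric itself): counting distinct committers per module from the history already flags knowledge islands and candidates for diffusion of responsibility. The threshold of “more than three” is an assumption chosen for the example.

```python
from collections import defaultdict

# Made-up commit history: (module, author) pairs.
commits = [
    ("billing", "alice"), ("billing", "alice"), ("billing", "bob"),
    ("auth", "carol"),
    ("reporting", "alice"), ("reporting", "bob"),
    ("reporting", "carol"), ("reporting", "dave"),
]

committers = defaultdict(set)
for module, author in commits:
    committers[module].add(author)

for module, authors in sorted(committers.items()):
    n = len(authors)
    flag = ("knowledge island" if n == 1
            else "diffusion of responsibility?" if n > 3 else "ok")
    print(f"{module}: {n} committer(s) -> {flag}")
# auth: 1 committer(s) -> knowledge island
# billing: 2 committer(s) -> ok
# reporting: 4 committer(s) -> diffusion of responsibility?
```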

How many developers should ideally work on a module? An exact number is difficult to give. But our research and other empirical findings show that it is in the range of 2.

So a photo comes to mind that fits in well. It was taken from a talk by Logitech CEO Bracken P. Darrell at the TNW Conference 2019 (Fig. 2). It refers to the successful real and fictitious pairs Jobs/Wozniak and Holmes/Watson.

Fig. 2: Logitech CEO Bracken P. Darrell at the TNW Conference 2019: “Great Things Happen in Teams of 2”

Another use case aims at estimating the risks of losing employees or identifying critical employees in the case of company acquisitions or team restructuring.

JavaSPEKTRUM: Are there typical errors or problems that you encounter with software systems with conspicuous frequency?

The first step would be to focus on process quality. By now, more and more projects record requirements and bugs as tickets and associate commits with a ticket ID. But very few of them record technical work, such as refactorings, as tickets. This makes it impossible to see the effect of this work, e.g. whether there are fewer bugs afterwards.
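A minimal sketch of that linking step, assuming ticket IDs follow a Jira-like “ABC-123” pattern (an assumption for illustration); commits without a ticket reference, typically untracked technical work, are exactly what becomes invisible later:

```python
import re

# Extract the ticket ID from each commit message, if present.
TICKET_ID = re.compile(r"\b[A-Z]+-\d+\b")

messages = [
    "FEAT-12: add CSV export",
    "BUG-7: fix rounding error in invoice totals",
    "refactor billing module",  # no ticket -> its effect cannot be traced
]

for msg in messages:
    match = TICKET_ID.search(msg)
    print(f"{msg!r} -> {match.group(0) if match else 'NO TICKET'}")
```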

Such technical work does not always have a positive effect, though. With the web framework Django, for example, there is the ticket type “Cleanup/Optimization”, under which technical work including refactoring is captured. This work has led to lower feature quality debt since 2017, i.e. new features are less coupled to each other. Nevertheless, bugs in certain areas of the code have hardly decreased. This was due to the high degree of coupling between the technical improvements and between the committers. This is surprising: many people working on improvements at the same time, in a slightly chaotic way.

Furthermore, there is sometimes little or no source code documentation, with excuses that seem far-fetched to us. DETANGLE already performs an analysis of this, with a view of its own.

And often there are alarmingly few automated tests, instead people waste a lot of time with manual testing before each release.

JavaSPEKTRUM: Can your tool be integrated with other tools? I was thinking about testing tools, modeling tools, development tools, version management, and the whole range of other tools.

Integration with code repositories and issue trackers is given by definition because they are the main sources of information for DETANGLE.

Integration with test tools is useful in several ways. Results of test runs, coverage and mutation testing tools can easily be captured by our analysis engine and fed into our data model. But we don’t want to reveal too much about what we might want to do next.

JavaSPEKTRUM: How does the integration into a DevOps environment work?

DETANGLE must be configured once and can be easily integrated into the CI/CD process. With every nightly build you can trigger the DETANGLE analysis again and update the visualizations or generate notifications.

JavaSPEKTRUM: Do you also support system architectures from multiple fields, such as mechatronics, electronics or IT infrastructures?

No. IT infrastructure would certainly be useful, too…

JavaSPEKTRUM: If a company decides to use DETANGLE, how exactly does the process work until it is ready for operational use?

It takes about two weeks as a consulting service, including set-up, analysis and presentation of the results. The customer receives the analysis results, views and boards via a Docker image.

JavaSPEKTRUM: Is there a way to test it?

Currently in the form of a proof of concept accompanied by us.

JavaSPEKTRUM: If you look five years into the future, what do you want to aim for?

DETANGLE should become an “official” seal of quality – for example, in assessing the quality of the development process or compliance with certain guidelines, such as the KAIT guidelines (supervisory requirements for IT in capital management) recently published by the German financial regulator BaFin.

There are even more aspects that can be assessed using machine learning: e.g. the quality of commit messages and ticket content – both important criteria for the quality of the development process.

Our point is that in a sea of code one should have the ability to distinguish good from bad

JavaSPEKTRUM: I’m curious. You call your company Cape Of Good Code. How did you come up with that name, anyway?

To be honest, the name was suggested by a friend of Konstantin (Oleg Pomjanski), who helped us design the logo, among other things. We liked the name right away and could identify with it immediately.

So far we have only received positive reactions. Even the notary (when the GmbH was founded) thought the name was “cool”.

For us, identification with the company name is all about the fact that in a sea of code one should have the ability to distinguish good and bad pieces. We offer a point of orientation; like a lighthouse, we point our customers in the right direction.

JavaSPEKTRUM: Finally, a completely different question: You were previously employed in companies. Based on your past and present experience, what advice would you give to current and future job starters?

Egon: In Germany, there is the general complaint that we do not think in sufficiently business-oriented terms. Nevertheless, we would like to take up the cudgels for “technology”: having dealt deeply with technical matters gives you the self-confidence to be able to learn everything that comes along.

If at some point you have a promising idea, the principle should be to look for companions. And not to take yourself too seriously, but to take others all the more seriously. Because only with a diversity of perspectives can a better and more sustainable solution be found.

And always remember: try to make someone happy with your solution.

Konstantin: Don’t be afraid of losing your job. Most of you will live to be 100 or older; the impact of a lost job on such a life is negligible. Also, most of the time you only regret what you didn’t do, so if you are in doubt, do it!

Posted by Cape of Good Code Team in Press About Us

SmartProduction Conference – 13.02.2020

Revolutionizing the integration of software and business development

Cape of Good Code joins the exhibition by presenting its DETANGLE® Analysis Suite:

Digital services and software products can no longer be successfully implemented without integrated software and business development. Cape of Good Code revolutionizes this integration through the analysis and AI-based forecasting of success-critical data at development time and runtime, on the business-relevant basis of features (their usage, development effort, error density and quality in the code), instead of pure code analysis.

More Information

Posted by Cape of Good Code Team in Events

Talk OOP conference 2020 – 05.02.2020

A balance of features, effort and quality on the way into the unknown

Egon Wuchner and Konstantin Sokolov are giving a talk on the following topic:

Projects going ‘into the unknown’ are characterized by an uncertain feature set, making R&D hard to plan and software maintainability a high-end goal. How can a high business value of software be achieved without wasting a lot of development effort? This talk shows how to strike the balance between focusing on the development of business-relevant features and effective quality improvements of these features. Value, effort and code quality have to be measured, monitored and estimated along the axis of features.

We introduce new concepts and analysis methods to focus all efforts and quality improvements along features. We will talk about feature modularity instead of code modularity only, and about the concept of feature quality debt, and show how it helps to align business with quality goals by turning heated, subjective discussions into data-driven, fact-based collaboration despite the “unknown”.

We are going to show how managers, product owners and engineers can answer the following questions in order to deal with the unknown variables of today’s projects.

 

  • What is the effort distribution along features/functionality? Is it supposed to be this way? Is the main effort spent on features with high business value? Do current features have a high business value?
  • How well can you extend existing features? How well can you add new features to your system? Which parts of your system are not well suited? Which of these parts are worth improving at all?
  • Where is it possible to predict upcoming errors and high maintenance effort based on previous feature implementations? Where is it necessary to act proactively to avoid exponential maintenance effort later?
  • How can you reliably estimate the effort it takes to address the aforementioned hot spots?
  • What are the risks of your knowledge distribution in your team(s)? Where are the knowledge islands and knowledge tangles (with the latter leading to bugs later)?

 

More Information

Posted by Cape of Good Code Team in Events