
Software Group Evolution Using Observation and Rules


Introduction

Evolution involves change, adaptation, learning and improving, risk, successes, and failures. The software industry has experienced scale-ups and downturns, transformations, and alignments. When an invention becomes an innovation, component technologies blend, forming ensembles of technologies that are critical to each other’s success. A plethora of component technologies, inventions, and innovations have influenced software evolution.

Industrial software dynamics include tradition, authority, competition, and thinking anew. How much time do we have until competitors catch up? Challenging traditional models is often necessary for evolving. The traditional industrial model, in which deployed, operational software is stable in production, implying that less effort is required at that stage compared with pre-production stages, is questionable in several software industries.

Classification, observation, and rules- or laws-based software evolution can be traced back to the ’70s in works like [1]. On the evolution of software science into an engineering discipline, the authors in [2] suggest: “If software engineering is ever to become a true engineering discipline, testing will form one of the critical pillars on which it will be built. (Perhaps one day, writing code without tests will be considered as professionally irresponsible as constructing a bridge without performing structural analysis.)”

I attempt to provide guidelines and thought-provoking questions for new and evolving software groups or start-ups. This article could form the basis for further reading and exploration.

A Note on System Complexity

To touch on the issue of system complexity at an introductory level, we may consider a system complex depending on how difficult it is to understand and analyze its behavior. A complex system has emergent properties that can only be evaluated once the complete system has been developed. It is non-deterministic, since a specific input does not always result in the same output.

The number and type of relationships (e.g., static or dynamic) between its components make the system difficult to analyze, and its success criteria have a distinctly subjective factor. An easy-to-follow introduction may be found in [3], where three different types of complexity, namely technical complexity, managerial complexity, and governance complexity, are introduced together with the system characteristics that may cause them.

Software Development: An Integration View

Software products can be created by integrating units of code into software components, and integrating components may lead to systems. As a result, at least three levels of testing software products can be identified: unit level, component level, and system level. It is beneficial to have a clear, group-wide definition of a unit of code at any level of abstraction. As this definition may evolve over time, you will have to revisit your definitions of components and systems accordingly, since you will need to test them.

Two options for integration are the big-bang approach and the incremental approach. In the big-bang approach, the system components are integrated and tested all at once, as a whole. In the incremental approach, interim integration testing occurs between components (or groups of them) before the complete system is tested. The incremental approach makes it easier and cheaper to find and fix bugs. Cheaper, because the earlier a bug is found, the cheaper it is to fix.

Easier, because a bug found when integrating two components means that one or both of them need a fix, whilst a bug found when integrating multiple components may call for more fixes, bearing in mind that the bug detection rate may decrease as the number of components increases. The author in [4] quotes an IBM report in which the cost to fix an error found after product release was four to five times higher than the cost to fix it during design. An interesting figure depicting the relative cost of fixing errors per software development stage can also be found there.

The incremental integration approach is reminiscent of an observation also known as Gall’s law [5], which states that “A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.” To put some software testing flavor into Gall’s observation, we could use “works as expected” or “worked as expected” wherever Gall uses the words “works” or “worked”, respectively.

Evolving from simple to complex (e.g., from units to components and systems), Gall’s observation suggests starting by testing a simple system and, based on that system, testing progressively more complex ones.

Let’s rephrase Gall’s observation as a question. How can we be confident that our complex system works as expected (think of system testing) when we are not confident that its components work as expected (think of unit testing)?

Unit testing is therefore the first building block, in chronological order, for improving our confidence that our software products work as expected. Good unit testing is about making educated decisions about which inputs should be used and which outputs are expected for each input.

Groups of inputs should be identified that share common characteristics and are expected to be processed in the same way by the unit under test. This is also known as equivalence partitioning, and once such groups are identified, each should be covered by unit tests. Details on possible structures for unit tests, test strategies, and implementation best practices may be found in [6-8].
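
To make partitioning concrete, here is a minimal sketch, assuming Python and pytest (the article does not prescribe a language, and the shipping-cost function and its tiers are invented for illustration). One representative input per partition, plus the boundaries between partitions, usually buys good coverage at low cost:

```python
import pytest

def shipping_cost(weight_kg: float) -> float:
    """Hypothetical unit under test: tiered shipping cost."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1.0:        # light parcels
        return 5.0
    if weight_kg <= 10.0:       # standard parcels
        return 5.0 + 2.0 * (weight_kg - 1.0)
    return 30.0                 # flat rate for heavy parcels

# One representative per partition, plus boundary values, since inputs
# within a partition are expected to be processed in the same way.
@pytest.mark.parametrize("weight, expected", [
    (0.5, 5.0),     # light partition
    (1.0, 5.0),     # boundary: light/standard
    (5.0, 13.0),    # standard partition
    (10.0, 23.0),   # boundary: standard/heavy
    (25.0, 30.0),   # heavy partition
])
def test_shipping_cost_partitions(weight, expected):
    assert shipping_cost(weight) == expected

def test_non_positive_weight_is_rejected():
    # The invalid partition deserves its own test.
    with pytest.raises(ValueError):
        shipping_cost(0)
```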

As multiple units of code are integrated and code components are created, interactions and interfacing between different components should be tested. Assuming that unit tests within components are adequate, the focus should be on component interfaces. Following Gall’s observation, first strive for confidence that each component works as expected. Confidence about the complete system may then be achieved gradually, by integrating several components.

Interfacing issues may only become evident under unusual conditions, and careful consideration should be given when deciding which use-cases to investigate. A component bug may only be found when that component is integrated with another specific component. It may be that the interaction between specific components is problematic, or that the inclusion of a specific component causes interaction issues in the entire system.
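
A minimal sketch of pairwise integration testing, again assuming Python and pytest (both components and their interface are invented for illustration):

```python
import pytest

class InMemoryRepository:
    """Hypothetical component A: stores records keyed by id."""
    def __init__(self):
        self._rows = {}

    def save(self, key: str, value: dict) -> None:
        self._rows[key] = value

    def load(self, key: str) -> dict:
        return self._rows[key]  # raises KeyError for unknown keys

class ReportService:
    """Hypothetical component B: formats data fetched through A's interface."""
    def __init__(self, repo: InMemoryRepository):
        self._repo = repo

    def summary(self, key: str) -> str:
        row = self._repo.load(key)
        return f"{row['name']}: {row['total']}"

def test_summary_over_real_repository():
    # Exactly two components are integrated, so a failure here points
    # at one of them or at their interface, not at the whole system.
    repo = InMemoryRepository()
    repo.save("q1", {"name": "Q1 sales", "total": 42})
    assert ReportService(repo).summary("q1") == "Q1 sales: 42"

def test_unknown_key_is_part_of_the_interface_contract():
    # An unusual condition: both components must agree on what happens
    # for missing keys, or the issue surfaces later in system testing.
    with pytest.raises(KeyError):
        ReportService(InMemoryRepository()).summary("missing")
```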

The way to approach integration testing depends on:

  1. The requirements or user-stories
  2. The software architecture used
  3. The implementation of the architecture by the developers

There are certain characteristics that software product requirements should have to lead to the appropriate architecture, coding, and testing [9], [10]. For groups working on user-stories, the INVEST properties are a good practice [11]. It is your product architecture that will reveal the importance of each component to be tested, as well as possible bottlenecks. Interesting works on software architecture may be found in [12], [13], and [14].

The way the architecture has been implemented in the code is another factor to consider. Best practices in software development may be found in [15-19]. The authors in [20] propose a set of rules for building maintainable software that is promised to be technology-independent and programming-language-independent. They also present statistics and examples from real-world systems, and two versions of the book exist, with examples in Java and C#.

Automation

Automation at multiple levels is essential, as it may result in important gains. Automating a system usually implies that the system is well understood and reliable. One benefit of trying to automate any task or system that is not mentioned enough in the literature is the expertise acquired during the automation effort, irrespective of how successful the automation project turns out to be.

Automating an unreliable system may amplify its unreliability, whilst automating a system that is not understood to an appropriate level may not result in any significant gains. However, when designed properly, automated systems may provide an extendable platform that can be applied to more systems. Platforms may be used to obtain performance metrics about systems and to discover unknown system details. Depending on the nature of the tasks to be automated, machines may be less error-prone than humans and can run much faster and continuously.

With a small number of machines (physical or virtual), manually collecting logs and deploying software may be affordable. As the number of servers increases, so does the amount of manual work. By automating host control and service deployment, adding more hosts should not linearly increase the workload. Note that even a small number of hosts may run many services, which still means multiple deployments to handle, many services to monitor, and a large number of logs to collect.
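
A minimal sketch of this decoupling, assuming Python, SSH access to the hosts, and a placeholder deploy-tool command (hosts, services, and the command are hypothetical; the works in [26] and [27] cover production-grade tooling):

```python
import subprocess

# Hypothetical inventory; in practice this would come from a
# configuration database or a cloud provider's API.
HOSTS = ["app-01.example.com", "app-02.example.com", "app-03.example.com"]
SERVICES = ["web", "worker"]

def deploy(host: str, service: str, version: str) -> bool:
    """Push one service to one host over ssh ("deploy-tool" is a placeholder)."""
    result = subprocess.run(
        ["ssh", host, f"deploy-tool install {service}=={version}"],
        capture_output=True, text=True,
    )
    return result.returncode == 0

def deploy_everywhere(version: str) -> None:
    # Adding a host means appending one entry to HOSTS rather than
    # more manual work: per-release effort stays roughly constant.
    for host in HOSTS:
        for service in SERVICES:
            ok = deploy(host, service, version)
            print(f"{host}/{service}: {'ok' if ok else 'FAILED'}")

if __name__ == "__main__":
    deploy_everywhere("1.4.2")
```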

At a requirements engineering level, the following lifecycle may be insightful.

Requirements --> Specifications --> Tests --> Documentation

As we move from left to right, we fine-grain our information about the system(s) that we need to build. Specifications can be considered as fine-grained requirements. Tests and acceptance criteria can be considered as fine-grained specifications. Documentation, in turn, can be considered as fine-grained tests. Documentation is the single source of truth for developers, testers, DevOps, and other personnel.

The absence of documentation will lead to misunderstandings, whilst heavy documentation will be costly to build and maintain. The precision level of requirements may change throughout the lifetime of products. A requirement may start as a vague goal with a high level of uncertainty, and as it is implemented into working code, uncertainty should decrease. After all, you write the working code from your requirements, not your requirements from the working code, right?

Tools built around Gherkin [21] can automate this lifecycle, bridge the gap between business and technical experts, produce living documentation [22], and create a cohesive whole. The approach works when tests are created from every acceptance criterion and those tests drive the design of the system by validating it frequently through testing code.
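
A minimal sketch, assuming Python and the pytest-bdd library (one of several Gherkin implementations; the feature file, step wording, and discount rule are invented for illustration):

```python
# features/checkout.feature would contain the business-readable side:
#
#   Feature: Checkout
#     Scenario: Applying a discount code
#       Given a cart worth 100 euros
#       When the code "SAVE10" is applied
#       Then the total is 90 euros

from pytest_bdd import scenarios, given, when, then, parsers

scenarios("features/checkout.feature")  # binds every scenario to a test

@given(parsers.parse("a cart worth {amount:d} euros"), target_fixture="cart")
def cart(amount):
    return {"total": amount}

@when(parsers.parse('the code "{code}" is applied'))
def apply_code(cart, code):
    if code == "SAVE10":  # hypothetical business rule under test
        cart["total"] *= 0.9

@then(parsers.parse("the total is {expected:d} euros"))
def check_total(cart, expected):
    assert cart["total"] == expected
```

Each acceptance criterion in the feature file becomes an executable test, so the documentation stays in step with the code it describes.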

Developers’ productivity can be increased via self-service provisioning of services. Developers should have access to the same tools used for deploying production services, which may help ensure that problems are identified early.

Automated testing increases testing speed and gives testers more time for mental activities like exploratory testing. The levels of automated testing can be summarized in a testing pyramid [23-25]. Different automation levels require different expertise and different tools, and have a different return on investment.

To maximize the benefits of our automation efforts, continuous integration and continuous builds should also be used. Continuous integration improves our confidence that newly checked-in code integrates properly with existing code: a continuous integration server may detect the committed code, check it out, and perform verifications. Continuous builds assess the production readiness of release builds, executing automated test-suites from any level of the automation pyramid as a measure of production readiness.
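
As a minimal sketch of the verification a continuous integration server performs on each commit, assuming Python and pytest (the stage names, paths, and commands are illustrative):

```python
import subprocess
import sys

# Each stage fails fast, mirroring a CI server that rejects a commit
# as soon as one verification step breaks.
STAGES = [
    ("checkout", ["git", "pull", "--ff-only"]),
    ("unit tests", ["pytest", "tests/unit", "-q"]),
    ("integration tests", ["pytest", "tests/integration", "-q"]),
]

def verify_commit() -> bool:
    for name, cmd in STAGES:
        print(f"--- {name} ---")
        if subprocess.run(cmd).returncode != 0:
            print(f"{name} failed: the commit does not integrate cleanly")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if verify_commit() else 1)
```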

Using technology that enables automation is also vital. This involves, among others, the tools used to manage hosts, and the works in [26] and [27] may help in picking such technologies. Site reliability issues and automation options may be found in [28]. Can you deploy the software you have written automatically? How long does it take? Can you deploy database changes without manual intervention? How long does it take from filing a bug to fixing it, re-deploying, and testing the fix? How does that interval grow, on average, as the number of bug fixes increases?

Development Process Basics

Without revenue, uncertainty may become an issue, and it is common for new and growing companies to focus on revenue before the processes within the company are ready. The degree of readiness is an important factor, of course, but asking the right questions may point in the right direction. What is the best way for your teams to communicate and collaborate to improve your products and/or make new ones? Towards this end, there exist multiple trends that may fit your needs, like test-driven development [7], agile development [29-32], lean development [30-33], and extreme programming [34].

You may also have a look at the processes used in established organizations like Microsoft [35] and Google [28], [36], [37]. As a cautionary note, just because an established organization used a specific process to develop software does not mean that it is the right process for your group. However, such readings can be a good starting point to get creative, experiment, and find the processes that are tailored to your abilities and needs.

Having a clear, company-wide set of processes is vital. I’ve found Conway’s law [38] to hold in all cases during my working experience. Conway suggests that organizations build systems (in the most general sense) that mirror their communication structure. If your processes and communication structure are not ready yet, your products will mirror that, which may jeopardize your ability to reach your revenue goals. Choosing to maximize revenue first and optimize your processes at a later stage may be a sensible choice under certain circumstances. However, not all technical and managerial choices we make are reversible, and for those that could be reversed in the future, the cost of reversing will probably be prohibitive or difficult to estimate.

Mindset

No process will be successful without the appropriate mindset. Evolving through learning requires a mindset that welcomes change and continuous improvement. Sources of learning include customer feedback, employee feedback during retrospective meetings, failures, and successes. Failures don’t have to be fatal to warrant root cause analysis and the like; near misses, where an event could have caused serious problems but did not, should be carefully studied too.

When something (inevitably) goes wrong in software development, it is important not to cultivate a culture of blame. The stakes may be high, but to evolve and improve, your group will have to focus on questions like: what happened, and how effective was our response? What should we do differently next time? What should we do to be more confident that this will not happen again? Instead of assigning blame, it is far more important to find the mechanisms that pave the way towards improvement.

Although technical, process, or mentality change is easier said than done, it is crucial to hire people with the right mentality and help them train and evolve. Last but not least, give them the motivation to go beyond their comfort zone.

Metrics

You cannot control what you cannot measure and estimate. The need for relative rather than absolute estimation in development tasks is explained nicely in [29]. Although the degree and nature of control may vary from bureaucratic, waterfall-like approaches to lean and agile approaches, metrics are necessary to make educated judgments and decisions. Your group’s ability to measure, estimate, re-estimate, and learn will mirror its maturity to a certain extent. What is the difference between a plan and planning? Why do we plan? Irrespective of how you approach plans and planning, it is beneficial to list some characteristics of good metrics and their evolution:

  1. Start small. Start with a limited set of metrics and build more metrics slowly.
  2. Use metrics that are simple to interpret. Simplicity is a quality. The more complicated the metrics, the more difficult they are to use.
  3. Use metrics based on data that can be found relatively easily. The more difficult it is to collect the data, the greater the chance that it will not be worth the effort.
  4. Avoid data errors. Collect data electronically as much as possible. This is the quickest way of data collection and it also avoids the introduction of manual data errors.
  5. Present metrics as simply as possible. Avoid complicated statistical techniques and models during presentations. Use easy-to-understand figures like tables, diagrams, and pie charts.
  6. Be transparent. Provide feedback to the people who have handed in the data as quickly as possible. Show them what you did with data.

Your metrics evolution is another snapshot of your group’s maturity. Once more, mentality will be a crucial factor in choosing which metrics to use and how to interpret them. They must be interpreted at a group level, not at an individual level. An increased number of tester-issued bugs may be interpreted as the testers doing a better job, or as the developers doing worse.

In both cases, the finding should be handled in a way that increases productivity and does not de-motivate. And what exactly is the obvious explanation? Is decreased code coverage always bad? It could occur simply because unused code covered by the tests was removed. Is an increase in the amount of committed code good or bad? What if we’ve committed spaghetti code?

Although not all goals are measurable, the following rule may be helpful: If it is not clear what metrics to use, think about what problem you are trying to solve. When the goal is clear, the metrics to use are usually straightforward.

Conclusion

Building teams, groups, and systems can be based on rules. Rules should be easy to follow and general enough to be immediately useful, yet to support evolution they must remain useful in the future. There has to be a balance between generality and detail, complexity and cost, technical focus and business focus, core values and mission. It is a question of finding the mechanisms to adapt or extend such rules so that they remain relevant in the future whilst maintaining core values and a clear mission.

As innovation, markets, needs, technology, and culture evolve at different paces, group software development also evolves, blending business skills, soft skills, and technical skills. Evolution through learning has a distinctive cultural characteristic, namely an evolutionary mindset: a mindset that can overcome difficulties, that cultivates teamwork and collaboration, and that makes professional and personal life a never-ending learning process.

References

  1. “Programs, life cycles, and laws of software evolution”, Lehman, M. M., Proceedings of the IEEE, Vol. 68, Issue 9, pp. 1060-1076, Sept. 1980
  2. “Next Generation Java Testing: TestNG and Advanced Concepts”, Beust, C., Suleiman, H., Pearson Education, 2008
  3. “Software Engineering”, Sommerville, I., Pearson Education, 10th Edition, 2016
  4. https://www.isixsigma.com/tools-templates/software/defect-prevention-reducing-costs-and-enhancing-quality/
  5. “The Systems Bible: The Beginner’s Guide to Systems Large and Small”, Gall, J., The General Systemantics Press, 2002
  6. “The Art of Unit Testing”, Osherove, R., Manning Publications, 2nd Edition, 2013
  7. “Test-Driven Development: By Example”, Beck, K., Addison-Wesley, 2002
  8. “Unit Testing Principles, Practices, and Patterns”, Khorikov, V., Manning Publications, 2020
  9. “Mastering the Requirements Process: Getting Requirements Right”, Robertson, S., Robertson, J., Addison-Wesley, 3rd Edition, 2012
  10. “Mastering Non-Functional Requirements”, Paradkar, S., Packt Publishing, 2017
  11. https://xp123.com/articles/invest-in-good-stories-and-smart-tasks/
  12. “Software Architect’s Handbook”, Ingeno, J., Packt Publishing, 2018
  13. “Fundamentals of Software Architecture”, Richards, M., Ford, N., O’Reilly Media, 2020
  14. “Clean Architecture: A Craftsman’s Guide to Software Structure and Design”, Martin, R. C., Prentice Hall, 2017
  15. “The Clean Coder: A Code of Conduct for Professional Programmers”, Martin, R. C., Pearson Education, 2011
  16. “Clean Code”, Martin, R. C., Prentice Hall, 2008
  17. “Working Effectively with Legacy Code”, Feathers, M., Pearson Education, 2005
  18. “The Pragmatic Programmer”, Thomas, D., Hunt, A., Addison-Wesley, 2nd Edition, 2019
  19. “Code Complete”, McConnell, S., Microsoft Press, 2nd Edition, 2004
  20. “Building Maintainable Software: Ten Guidelines for Future-Proof Code”, Visser, J., Rigal, S., Wijnholds, G., Van Eck, P., Van der Leek, R., O’Reilly Media, 2016
  21. https://cucumber.io/docs/gherkin/
  22. “Specification by Example”, Adzic, G., Manning Publications, 2011
  23. https://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid
  24. “Agile Testing: A Practical Guide for Testers and Agile Teams”, Crispin, L., Gregory, J., Addison-Wesley, 2009
  25. “More Agile Testing: Learning Journeys for the Whole Team”, Crispin, L., Gregory, J., Addison-Wesley, 2015
  26. “Continuous Delivery in the Wild”, Hodgson, P., O’Reilly Media, 2020
  27. “Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation”, Humble, J., Farley, D., Addison-Wesley, 2010
  28. “Site Reliability Engineering: How Google Runs Production Systems”, Beyer, B., Jones, C., Murphy, N. R., Petoff, J., O’Reilly Media, 2016
  29. “Agile Estimating and Planning”, Cohn, M., Prentice Hall, 2005
  30. “Large-Scale Scrum: More with LeSS”, Larman, C., Vodde, B., Addison-Wesley, 2016
  31. “Scaling Lean & Agile Development: Thinking and Organizational Tools for Large-Scale Scrum”, Larman, C., Vodde, B., Addison-Wesley, 2008
  32. “Practices for Scaling Lean & Agile Development: Large, Multisite, and Offshore Product Development with Large-Scale Scrum”, Larman, C., Vodde, B., Addison-Wesley, 2010
  33. “The Lean Mindset: Ask the Right Questions”, Poppendieck, M., Poppendieck, T., Addison-Wesley, 2013
  34. “Extreme Programming Explained: Embrace Change”, Beck, K., Andres, C., Addison-Wesley, 2nd Edition, 2004
  35. “How We Test Software at Microsoft”, Page, A., Johnston, K., Rollison, B., Microsoft Press, 2008
  36. “How Google Tests Software”, Whittaker, J. A., Arbon, J., Carollo, J., Addison-Wesley, 2012
  37. “Software Engineering at Google”, Winters, T., Manshreck, T., Wright, H., O’Reilly Media, 2020
  38. https://en.wikipedia.org/wiki/Conway%27s_law