Companies are in continuous motion: new requirements, new data streams, and new technologies pop up every day. When designing a new data platform to support your company's needs, failing to perform a complete assessment of the available options can have disastrous effects on the company's capability to innovate and to keep its data assets usable and reusable in the long term. A standard assessment methodology is an absolute must to avoid personal bias and to properly evaluate the various solutions across all the needed axes. The SOFT Methodology provides a comprehensive guide to all the evaluation points for defining robust and future-proof data solutions. However, the original blog doesn't discuss a couple of important factors: why is applying a methodology like SOFT important? And, even more, what risks can we encounter if we don't? This blog aims to cover both aspects.

The Why

Data platforms are here to stay: the recent history of technology has shown that data decisions made now have a long-lasting effect. We commonly see frequent rework of the front end, but radical changes to the back-end data platform are rare. Front-end rework can radically change the perception of a product, but back-end changes don't immediately impact end users. Changing a product provider is nowadays fairly frictionless, but porting a solution across different back-end tech stacks is, despite the eternal promise, very complex and costly, both financially and time-wise. Some options exist to ease the migration, but code compatibility and performance are never a 100% match. Furthermore, when talking about data solutions, performance consistency is key. Any change in the back-end technology is therefore seen as a high-risk scenario, and is most of the time refused with the statement, "don't fix what isn't broken." The fear of change blocks both new tech adoption and upgrades of existing solutions.
In summary, the world is full of examples of companies using back-end data platforms chosen ages ago, sometimes running old, unsupported versions. Any data decision made today therefore needs to be robust and age well in order to support the company's future data growth. A standard methodology helps you understand the playing field, evaluate all the possible directions, and accurately compare the options.

The Risks of Being (Data) Stuck

OK, you're in the long-term game now. Swapping back-end or data pipeline solutions is not easy, so selecting the right one is crucial. But what problems will we face if our selection process fails? What are the risks of being stuck with a sub-optimal choice?

Features

When thinking about being stuck, it's tempting to compare the chosen solution with the new and shiny tooling available at the moment, and with its promised future features. New options and functionality could enhance a company's productivity, system management, and integration, and remove friction at any point of the data journey. Being stuck with a suboptimal solution, without a clear innovation path and without any capability to influence its direction, puts the company in a potentially weak position regarding innovation. Evaluating the community and the vendors behind a technology can help decrease the risk of stagnating tools. It's very important to evaluate which features and functionality are relevant or needed, and to define a list of "must-haves" to reduce time spent on due diligence.

Scaling

The SOFT methodology blog post linked above touches on several directions of scaling: human, technological, business case, and financial.
Hitting any of these problems could mean that the identified solution:

- Could not be supported due to a lack of talent
- Could hit technical limits and prevent growth
- Could expose security/regulatory problems
- Could be perfectly fine to run in a sandbox, but financially impractical on production-size data volumes

Hitting scaling limits therefore means that companies adopting a specific technology could be forced to either slow down growth or completely rebuild solutions on top of a different technology choice.

Support and Upgrade Path

Sometimes the chosen technology advances, but companies are afraid of, or can't find the time or budget for, upgrading to the new version. The associated risk is that the older the software version, the more complex (and risky) the upgrade path becomes. In exceptional circumstances, an upgrade path may not exist at all, forcing a complete re-implementation of the solution. Support needs a similar discussion: staying on a very old version could mean a premium support fee in the best case, or a complete lack of vendor/community help in the vast majority of scenarios.

Community and Talent

The risk associated with talent shortage was already covered in the scaling chapter. New development and workload scaling heavily depend on the humans behind the tool. Moreover, not evaluating the community and talent pool behind a technology decision could create support problems once the chosen solution becomes mature and the first set of developers/supporters leaves the company without proper replacement. The lack of a vibrant community around a data solution can rapidly shrink the talent pool, creating issues for new features, new developments, and existing support.

Performance

It's impossible to know what the future will hold in terms of new technologies and integrations.
But selecting a closed solution, with limited (or no) integration capabilities, forces a company to run only "at the speed of the chosen technology," exposing it to the risk of not being able to unleash new use cases because of technical limitations. Moreover, not paying attention to the speed of development and recovery could expose limits on the innovation and resilience fronts.

Black Box

When defining new data solutions, an important aspect is the ability to make data assets and related pipelines discoverable and understandable. A black-box approach exposes companies to repeated efforts and inconsistent results, which decrease trust in the solution and open the door to misaligned results across departments.

Overthinking

The opposite risk is overthinking: the more time spent evaluating solutions, the more technologies, options, and needs will pile up, making the final decision process even longer. An inventory of the needs, timeframes, and acceptable performance is necessary to reduce the scope, make a decision, and start implementing.

Conclusion

When designing a data platform, it is very important to address the right questions and avoid the "risk of being stuck." The SOFT Methodology aims to provide all the important questions you should ask yourself in order to avoid pitfalls and create a robust solution. Do you feel all the risks are covered? Have a different opinion? Let me know!
When two people get together to write code on a single computer, it is called pair programming. Pair programming was popularized by Kent Beck's book Extreme Programming Explained, in which he describes the technique of developing software in pairs; this sparked the interest of researchers in the subject. Lan Cao and Peng Xu found that pair programming leads to a deeper level of thinking and engagement in the task at hand.

There are different styles of pair programming, such as driver/navigator, ping-pong, strong-style, and pair development. All of them are well described by Birgitta Böckeler and Nina Siessegger, whose article explains how to practice each style. Here, we will focus on only two of them, driver/navigator and ping-pong, as they seem to be the most commonly used. The objective is to look at what should be avoided when developing software in pairs. First, we briefly introduce each pair programming style, and then we go through the behaviors to avoid.

Driver/Navigator

In my experience, driver/navigator is the most popular style among practitioners. The driver is the one writing the code and thinking about the solution in place, making concrete steps to advance the task at hand. The navigator, on the other hand, watches the driver and gives insights on the task at hand. But not only that: the navigator is the one thinking in a broader way, and she's also in charge of giving support. Communication between the driver and navigator is constant. This style is also the one that fits well with the Pomodoro technique.

Ping-Pong

Ping-pong is the style that embraces Test-Driven Development; the reason lies in the way its dynamic works. Let's assume we have a pair that will start working together, Sandra and Clara.
The ping-pong session should go something like the following:

1. Sandra starts by writing a failing test
2. Clara makes the test pass
3. Clara can now decide if she wants to refactor
4. Clara writes a failing test for Sandra
5. The loop repeats

It is also possible to expand ping-pong into a broader approach. One might start a session by writing a class diagram, and the next person in the pair implements the first set of classes. Regardless of the style, what is key to the success of pair programming is collaboration.

Behaviors To Avoid

Despite its popularity, pair programming seems to be a methodology that is not widely adopted by the industry. When it is adopted, the meaning of "pair" and "programming" might vary with the specific context. Sometimes pair programming is used only at specific moments throughout the day to fulfill specific tasks, as reported by Lauren Peate on the podcast Software Engineering Unlocked, hosted by Michaela Greiler. But in XP, pair programming is the default approach to developing all aspects of the software. Due to the varying interpretations of what pair programming is, companies that adopt it might face some misconceptions about how to practice it. Often, this is the root cause of a poor pairing experience:

- Lack of soft (social) skills
- Lack of knowledge of the practice of pair programming

In the following sections, we will go over some misconceptions about the practice. Avoiding them might lead to a better experience when pairing.

Lack of Communication

Driver/navigator is a style that requires the pair to focus on a single problem at once. Therefore, the navigator is the one who should give support and question the driver's decisions to keep both in sync. When that does not happen, the collaboration session might suffer from a lack of interaction between the pair.
The first misconception of the driver/navigator approach is that the navigator just watches the driver and does nothing; it should be the opposite. As much communication as possible is a sign that the pair is progressing. Of course, we haven't mentioned the knowledge variance that the driver and navigator might have.

Multi-Tasking

Checking the phone for notifications, or diverting attention to anything other than the problem at hand, is a warning that the pair is not in sync. The advent of remote pair programming sessions might even facilitate such distractions. The navigator should give as much support as possible, and even more when the driver is blocked for whatever reason. Some activities that the navigator might want to perform:

- Checking documentation for the piece of code that the driver is writing
- Verifying that the work goes toward the end goal of the task (this should prevent the pair from going down a path that is out of scope)
- Controlling the Pomodoro cycle, if agreed

On the other hand, the driver is also expected to write the code and not just be the navigator's puppet. When that happens, the collaboration in the session might suffer, leading to a heavy load on the navigator.
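Stepping back to the ping-pong style described earlier, one round between Sandra and Clara can be made concrete with a small sketch. This is a hypothetical example (the FizzBuzz kata and the test names are illustrative, not from the original article), using plain Python asserts rather than any particular test framework:

```python
# One ping-pong round: Sandra writes a failing test first,
# Clara writes just enough code to make it pass, then roles swap.

def fizzbuzz(n: int) -> str:
    # Clara's minimal implementation, written only after
    # Sandra's test below already existed (and failed).
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# Sandra's opening test (the "ping"):
def test_multiples_of_three_return_fizz():
    assert fizzbuzz(3) == "Fizz"

# Clara's follow-up test (the "pong"), handing the keyboard back:
def test_multiples_of_five_return_buzz():
    assert fizzbuzz(5) == "Buzz"

test_multiples_of_three_return_fizz()
test_multiples_of_five_return_buzz()
```

The refactoring step is optional in each round: after making a test pass, the current driver may clean up the code before writing the next failing test for their partner.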
Big teams often struggle with communication, coordination, decision-making, and the delivery of large-scale projects. Agile provides a framework that helps reduce these issues, allowing teams to move quickly and adapt to changes. It encourages teams to work together more collaboratively, breaking down large projects into smaller, manageable chunks. Agile also helps to prioritize tasks, identify and manage dependencies, and provide clarity on the overall project goals. This helps big teams stay organized and on track, and makes sure everyone is working towards the same objectives. Perhaps you've heard about Scaled Agile Frameworks and wonder whether it's worth investing in one of them to get to the next level. But how far can you go with Agile frameworks?

There are some blind spots that should be considered when using Agile frameworks. One of the main problems is that they structure communication in a way where domain experts are at the top, and their will is taken to production. This poses a risk because there is no guarantee that the software engineers will implement what the domain experts actually expect. Therefore, it is important to ensure that all stakeholders are included in the discussion, and software engineers must have a clear understanding of what is going on. As Alberto Brandolini says:

It is not the domain expert's knowledge that goes into production, it is the assumption of the developers that goes into production.

How are developers supposed to solve a problem when they do not know what the problem is? Are the developers' assumptions right? Is there any shared understanding? This article is about how we can use BDD and DDD tools and techniques to overcome complexities, blind spots, and misunderstandings, to properly crunch knowledge, and to find a ubiquitous language.

Domain-Driven Design Is Linguistic

In order to communicate effectively, domain experts and the development team must have a shared understanding.
Developers do not need to be experts, but they must be familiar with the terms and their context in that domain. For example, if they are working in the health domain, they do not need to know how to treat a patient, but they do need to know the exact meaning of the word "shift" in that domain. It is not enough to just know the meaning of the term; they must also understand the context in which it is used.

Separating the same term across different contexts is at the core of Domain-Driven Design. Though a term may have the same meaning within two different contexts, this is not our primary consideration. Instead, it is the context that defines the meaning of the specific term that matters most. Let me clarify further. Imagine a cup of coffee with a specific cup, taste, and even brand. When it is served in a coffee shop, it has value and you will pay for it. But if the same coffee is left on a bench in a park, would you pay for it, or even drink it? Of course not; it is the same coffee, yet in a different context. This is why context matters. If OOP thinks in terms of objects, DDD thinks in terms of contexts.

Many developers make the mistake of unintentionally thinking in terms of data, taking care of certain states rather than behavior. Behavior is not data: data is the product of behaviors under fixed circumstances, and it's what we might base a decision on. In fact, the behavior that leads to a certain state is more important than the state itself. We can show this with math: f(x) = y and g(x) = y do not imply f = g. We cannot conclude that f and g are interchangeable based on their final state. As I mentioned above, we preserve the meaning within a specific context, not the final state. Remember the coffee example: we do not care how the coffee is made (it is a normal cup of coffee), but we do care about the context. Coffee is our data, and the coffee shop and the park are the contexts in which its behavior is defined. Can we replace them?
I mean, even though the final output is a normal cup of coffee, we cannot replace the park with the coffee shop. In other words, we cannot say, "I want a cup of coffee; I can take it from the park or the coffee shop, who cares?" We separate park coffee and coffee shop coffee with the help of bounded contexts in Domain-Driven Design (DDD). We encapsulate each term and make it ubiquitous within its own context. This means that we do not have a single global ubiquitous language; instead, each bounded context has its own ubiquitous language.

Miscommunication during knowledge-crunching sessions can have different causes, such as cognitive bias, a type of error in reasoning, decision-making, and perception that occurs due to the way our brains perceive and process information. This type of bias occurs when an individual's cognitive processes lead them to form inaccurate conclusions or make irrational decisions. For example, when betting at a roulette table, if previous outcomes have landed on red, we might mistakenly assume that the next outcome will be black; however, these events are independent of each other (i.e., their probabilities do not affect each other). There is also apophenia, the tendency to perceive meaningful connections between unrelated things, such as conspiracy theories, or the moment we think we get it but actually do not. A good example is an image sent from Mars that includes a shape on a rock that you might think is the face of an alien, but it's just a randomly shaped rock. Sometimes people lie, and not in a bad way; I mean, not on purpose. Imagine you are working on your laptop and your roommate asks you to go to the kitchen and turn the light off. Twenty minutes later, you go, but you forget to turn the light off, without any intention of deceiving. These things are inevitable during collaboration sessions, and we need to find ways to reduce them.
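Returning to the coffee example, the bounded-context idea can be sketched in code. This is a minimal, hypothetical Python sketch (the class and attribute names are illustrative): each context gets its own model of "coffee," and the same final state does not make the two behaviors interchangeable.

```python
from dataclasses import dataclass

# Two bounded contexts, each with its own model of "coffee."

# --- coffee-shop context: coffee is a sellable product ---
@dataclass(frozen=True)
class ShopCoffee:
    brand: str
    price_cents: int  # meaningful only in this context

    def sell(self) -> int:
        return self.price_cents

# --- park context: the same physical coffee, but abandoned ---
@dataclass(frozen=True)
class ParkCoffee:
    brand: str  # same brand, same taste...

    def sell(self) -> int:
        # ...but selling simply has no meaning in this context.
        raise RuntimeError("Abandoned coffee has no sale price")

# Same "final state" (a cup of the same brand), different behavior:
# f(x) = y and g(x) = y does not imply f = g.
shop = ShopCoffee(brand="Arabica", price_cents=450)
park = ParkCoffee(brand="Arabica")
print(shop.sell())  # 450
```

The design point is that neither class tries to be a universal `Coffee`: each bounded context keeps only the attributes and behavior that are meaningful in its own ubiquitous language.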
Developers should collaborate effectively to create a ubiquitous language and have a clear understanding of the different terms in the domain, in order to create proper bounded contexts. However, making sure they understand every problem in the domain is a difficult task. This problem is best addressed by employing DDD and BDD together: these approaches have different tools to cover every blind spot and minimize ambiguity.

Domain (Problem Space)

For this article, I define a straightforward domain for an imaginary campervan rental company operating within Texas. The main service of the company is renting out campervans, which are managed at the HQ. Clients can only rent available campervans, each of which is identified by its unique car tag. Campervans must be picked up and returned at one of the company's stations (e.g., Houston, San Antonio, Dallas, Austin, etc.). There is no limitation regarding the return station; that is, a campervan can be returned to any of the stations. Clients can cancel their rental before pick-up, which means that they cannot cancel after they have picked up the campervan. Also, they cannot make three cancellations in a row, or else their account will be limited. Campervans must be returned before the due date of the rental; if the client returns late, they must pay a penalty, which is currently a fixed price.

Every campervan can be serviced and repaired in the company's repair garage. Whenever a campervan is in the repair process, it is not available for rent. Every campervan must be repaired after five rentals, or three months after the last repair. The company has portable equipment (like portable toilets, bed sheets, sleeping bags, camping tables, chairs, etc.); equipment is added or removed, with its respective stock, at the company's HQ. Clients can book any number and type of available equipment for their rental, in addition to their campervan. Equipment is stored at stations and has a limited count at any given point in time.
Once a client drives off the station, the available amount of equipment at that station is reduced (by the number of items the client took with them), and when the client returns the campervan, the amount of equipment at the return station is increased accordingly. Since the amount of equipment is limited, and during the high season the number of campervans and pieces of equipment in simultaneous use is at its highest, the business needs to plan ahead for the amount of equipment needed at each station per day. This mitigates the risk of running out of equipment. For simplicity, I have omitted the payment part. Let's use event storming and BDD to model our domain.

Event Storming

One of the best approaches to knowledge crunching is event storming. It is a flexible workshop format for the collaborative exploration of complex business domains. I will cover event storming in a series of articles soon, but this article only gives a glimpse of it. What makes event storming so efficient? It is a rapid, lightweight, intense, highly interactive, and collaborative workshop that helps to build a ubiquitous language. The common goal is for each participant to share the maximum amount of domain knowledge.

There are different types of event storming workshops: big picture, process modeling, and design. A big picture session is usually used to discover the business domain and share knowledge. Process modeling and design sessions are more focused on system design and defining aggregates, and involve developers, product owners, UX/UI designers, and engineering managers. Let's start off with the big picture, like a blind man in a room: you may tackle the whole business in this session. In enterprise companies, knowledge is scattered: each department has its own expert, who knows little or nothing about the others. The real value of any brainstorming is the people and their knowledge.
Therefore, for a successful event storming session, inviting the right people with the appropriate knowledge is essential. It starts with chaotic exploration: sticking orange sticky notes to the wall. On each note there is a past-tense sentence that describes a domain event.

Domain Events

Each event is expressed with a past-tense verb, written down on an orange sticky note, together with its respective actors. Here are a few points to help you understand what domain events are:

- You could read about them in domain books.
- Domain experts understand them.
- Writing them in the past tense is a trick to create meaningful events.
- They are not actions of someone or something. Even though some events result from actions, we are not interested in actions yet.
- They are not technical and should not be specific to our system's implementation.

In the imaginary campervan rental company domain, the chaotic exploration phase works like this: whenever we come across or agree on a domain word, feel free to write a definition for it on a large yellow sticky note and add it to the wall. This is a way to build up the domain's ubiquitous language, and it is very helpful for improving communication between all of us, which in turn improves how we work in many different respects.

What about a question we cannot answer, something that does not seem right, or any problem we should look into? We use purple sticky notes to "park" problems. After this, ask attendees to identify actors (users with a role) that trigger or respond to events. The convention is to use a small yellow sticky note for that. Note: there is no need to add an actor to every event; sticking one at the beginning of a chain of events is enough. Similarly, complex systems also interact with external systems. External systems are not humans; they could be an online API, for example. The convention is to use blue sticky notes for external systems. Place them where the events interact with them.
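Once discovered on the wall, domain events often end up in code as immutable, past-tense records. A minimal sketch for the campervan domain, assuming hypothetical event names (they are illustrative, not taken from an actual workshop):

```python
from dataclasses import dataclass
from datetime import datetime

# Domain events: named with past-tense verbs, carrying only the
# facts that describe what happened. frozen=True keeps them
# immutable, because an event is a fact that never changes.

@dataclass(frozen=True)
class CampervanPickedUp:
    car_tag: str
    station: str
    occurred_at: datetime

@dataclass(frozen=True)
class CampervanReturned:
    car_tag: str
    station: str
    occurred_at: datetime

@dataclass(frozen=True)
class RentalCancelled:
    rental_id: str
    occurred_at: datetime

# Once created, an event is only ever read, never mutated.
event = CampervanPickedUp("TX-1234", "Austin", datetime(2023, 5, 1, 9, 0))
print(event.car_tag)  # TX-1234
```

Note the deliberate absence of technical detail (no database IDs, no HTTP status codes): the events speak the domain's language, just like the sticky notes.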
Command

Now, it's time to focus on the commands that trigger the corresponding events. Write each command down on a light blue sticky note and place it to the left of the event it spawns. A command is a message that represents the intention of a user, and it can be expressed as an action: request booking, cancel booking, request refund, etc.

Policy

The policy artifact is used to document the conditions and policies under which events happen. On the storming wall, a policy sits between a domain event and a command. Policies are formalized as, "Whenever X, then Y," or, "If X, then Y." Imagine that when a client picks up a campervan, we need to update the related equipment stock for the origin and destination stations; for the destination, the update must be applied at the expected return date. The policy would be an Equipment Stock Change Policy, which states that if a client picks up a campervan, then the related stock is updated.

Aggregate

In Domain-Driven Design, an aggregate is a cluster of domain objects that can be treated as a single unit. One of the aggregate's component objects is the aggregate root. The aggregate root is the only member of the aggregate that outside objects are allowed to hold references to; all of the other objects within the aggregate can only be accessed through the aggregate root. This enforces data integrity and consistency within the aggregate.

The hard part of Domain-Driven Design is identifying aggregates: it's always hard and confusing. With event storming, however, this becomes much easier and more understandable. Go through all commands and events that are not linked to an external system and add an empty yellow sticky note there. Please don't call them aggregates; it works better if you call them "business rules." Ask participants to fill in these business rules:

Preconditions: These are the things that must be true before a method is called. The method tells clients, "this is what I expect from you."

Postconditions: These are the things that must be true after the method completes. The method tells clients, "this is what I promise to do for you."

Invariants: These are the things that are always true and won't change. The method tells clients, "if this was true before you called me, I promise it'll still be true when I'm done."

Let's gather the business rules, commands, and events that occur together, no matter where they are in the process. The last step is finding a proper name for your aggregate. I also identified bounded contexts.

What Is BDD (Behavior-Driven Development)?

BDD is a collaborative practice. These days, engineers think BDD is Gherkin (given, when, then), or they might know it as Cucumber, which is wrong: BDD is not about UI tests, log-in scenarios, etc. BDD is an iterative approach that fills gaps in Agile. BDD starts with a team, and the team needs to discover first, then formulate, and at the end implement. The discovery phase is an activity that leads to a ubiquitous language through collaboration, using examples and conversations about rules. Due to its nature, discovery works best when the team discusses the requirements together.

The discussion of a user story usually generates examples. Collecting examples can start with simple questions, such as, "Could you please give an example of this?" We collect examples for the user story's rules, which are already defined in the event storm. After that, we take the examples and recheck what is going on on the event storming wall to see if we can find any errors or needed realignments. This helps to improve and justify the model. At this stage, it is perfectly acceptable to capture the examples as a list of steps that describe the behavior of the system in a particular case. In the formulation step, we transform the examples into scenarios using the "Given/When/Then" keywords. This scenario format and its keywords are called Gherkin.
The scenarios written in Gherkin syntax can be executed as automated tests by different tools, like Cucumber. Write Gherkin for the rules that have priority. A sample Gherkin scenario for the rental company domain would be:

Scenario: As a client, I want to update my rental date.
Given that I have a rental request for a campervan
When I decide to update my rental
Then the rental should be canceled, and I should be able to rent a new campervan on the desired date.

Good Gherkin scenarios are business-readable and, via automation, they verify the application. When they fail, the failure is understandable by both the business side and the delivery side of the project. Essentially, they make a connection between the problem and the solution, or, in other words, between the members of a distributed team. Scenarios represent our shared knowledge and our shared responsibility to produce quality software.

Conclusion

Every tool has its own blind spots, and every blind spot might be a million-dollar mistake. Let's use DDD and BDD tools together to build better models.
DevOps has brought the topic of organizational culture firmly to the table. While culture was always an element of Agile and Lean, the research into DevOps has shown it's just as important as the more technical capabilities. The DevOps structural equation model has several elements related to people and culture, so it's clear that human issues are an important part of the DevOps picture. The five cultural capabilities in the model are:

- Climate for learning
- Westrum organizational culture
- Psychological safety
- Job satisfaction
- Identity

The cultural capabilities drive software delivery and operations performance, which predicts successful business outcomes. At the same time, several HR hot topics have trended across all industries over the past few years:

- Work to rule (quiet quitting)
- The Great Resignation / Reshuffle
- The four-day workweek
- Hybrid working

As Emily Freeman (author of DevOps for Dummies) said: "The biggest challenges facing tech aren't technical, but human." So, where should you start when it comes to understanding culture in the context of DevOps?

The Fundamental Assumption

In 1960, Douglas McGregor published a book called The Human Side of Enterprise, in which he described how a fundamental assumption about human behavior results in different management styles. You either believe that:

- Theory X: People don't want to work and need to be motivated by rewards and punishments.
- Theory Y: People are intrinsically motivated to do good work.

Many decisions in the workplace involve a trade-off, but these fundamental assumptions are mutually exclusive. If you believe Theory X, you:

- Centralize decision-making.
- Track individual performance.
- Use rewards and punishments to motivate workers.

With Theory Y, you:

- Focus on setting clear goals.
- Let people direct their own efforts.

When you follow Theory Y, employees become the organization's most valuable asset.
McGregor considered Theory X and Theory Y to be two options a manager would choose from after assessing a workplace: first, you'd review the work and the people, then decide whether you need an authoritarian style or a more hands-off approach. We've since learned, through the study of system failures, that cultures with high trust and low blame are safer than bureaucratic or pathological cultures. Theory Y is foundational to Lean, Agile, and DevOps, and is the underlying assumption of a generative culture.

Mission Command

Although military organizations are traditionally seen as Theory X cultures, modern military units operate using mission command. The mission command pattern decentralizes decision-making by providing clear goals. As a result, the soldiers with boots on the ground can respond dynamically as events unfold rather than waiting for orders. This is an application of Theory Y culture. The civilian version of this is called workplace empowerment, which requires that:

- You share information with everyone.
- You create autonomy through boundaries.
- You replace hierarchy with self-directed teams.

Workplace empowerment combines centralized intent with decentralized execution. In software delivery, this typically involves a shared vision implemented by a cross-functional, self-organizing team.

Culture Predicts Safety

When you feel safe speaking up, nobody is blamed, and near-misses and minor faults fuel learning. Each incident results in positive action to make the workplace safer, whether the industry is manufacturing, nuclear power, aviation, or software delivery. If people don't feel safe reporting close calls, the unspoken risks accumulate until, very often, a disaster happens. You don't have to be in a safety-critical industry to benefit from this relationship. The same cultural traits that predict safety are also related to communication, collaboration, innovation, and problem-solving.
In addition, culture affects the flow of information, which is critical to all these activities. "In 2022, we found that the biggest predictor of an organization's application-development security practices was cultural, not technical: high-trust, low-blame cultures focused on performance were significantly more likely to adopt emerging security practices." (The Accelerate State of DevOps Report, 2022) Theory X management restricts the flow of information and limits who can take action. Managers draw information up and pass decisions back down. Theory Y leadership leads to strong information flow and prompt action in response. Information flows freely, and decisions are made close to the work. Changing Culture Changing team and organization culture is one of the toughest challenges in software delivery. Not even the most complex automation task in your deployment pipeline comes close. You need a clear vision of your intended future state, which needs to be pushed rapidly, firmly, and regularly to ensure the goal remains clear. You need leaders and managers to understand that their roles are to enable self-organizing teams that use each team member's talent. You need to move away from systems that centralize information and decision-making and move to systems aligned with distributed responsibility. For example, if you use centralized tools to organize tasks and assign them to people, you need to move to a system that aligns with setting a clear mission without removing the ability of teams to self-organize and respond to dynamic situations. You may need to replace a tool entirely or use the tool in a new way. Your Gantt charts might have to go, but your task-tracking app can remain if the team can re-purpose it. The leadership role in a culture change is to: relentlessly push the desired end state, reinforce the role of leaders and managers as enablers, and ensure teams become self-organized.
A healthy culture should also be clear about the importance of the flow of information and must set a standard for communication style. We follow the Radical Candor approach. This lets us be direct in our communications but in a framework where we all care about each other. Radical Candor lets individuals show courage and challenge others when they might otherwise remain silent. This ultimately means we can all work better without harmful or toxic behavior. You won't make a dent in culture without a clear, robust, and sustained push. You have to overcome inertia and battle organizational immune responses. Despite the difficulty, the research is conclusive that culture is vital to high performance. Conclusion When people talk about culture in the context of DevOps, they're referring to Westrum's generative culture, which is based on McGregor's Theory Y assumption. Simply put, you should aim for a clear, shared mission combined with decentralized decision-making. All modern software delivery methods refer to this concept of empowered teams in different ways. We refer to this as modern workplace culture, yet the ideas are over 100 years old. For example, mission command dates back to the 1800s, Theories X and Y are explained in a book from 1960, and Westrum's typology of organizational cultures was designed in the 1980s. You'll find culture is the toughest nut to crack in DevOps. It's tempting to rely on research and statistics to prove the case for a generative culture. Still, the reality is that cultural change depends on compelling storytelling and creating a compelling vision of what the organization will look like after the transition.
TDD has been a subject of interest for practitioners for at least the last ten years, and even longer if we take into account the eXtreme Programming practices and the Agile manifesto. Despite its claimed popularity today and its symbolism of quality, the practice of writing the test before production code is still uneven. It varies based on the practitioner's context, past experiences, and learning path. We could elaborate further on the uneven knowledge of TDD, starting from the formal education on the subject; it might, therefore, require even more discussion about its applicability. Is it possible to teach TDD effectively without professional project experience? Some might argue that it is possible, while others will say the opposite. Despite great content published by renowned publishers such as O'Reilly, Packt, Addison-Wesley, Apress, and Manning, the practice of TDD is still a challenge; even the best books and the best examples cannot automatically translate their content to the unique problems that practitioners face on a daily basis. Katas are a tool that might be used to fill this gap on both fronts: the lack of formality in learning TDD and the uniqueness of the problems that practitioners face. Practicing with katas is not a replacement; it should be understood as an aid instead. The Mismatch With Real Problems Practitioners have tried different approaches to internalize test-driven development. Despite the effort, the mismatch between training and production code persists. The patterns found in practicing katas are close to greenfield projects. In the day-to-day, it is most likely that practitioners will join a brownfield project that is not that friendly to maintain. There are books that focus only on this aspect of things, for example, Working Effectively with Legacy Code by Michael Feathers, Refactoring: Improving the Design of Existing Code by Martin Fowler, Refactoring to Patterns by Joshua Kerievsky, and many more.
The patterns practitioners rehearse in katas are usually a mismatch with the production code they face, most frequently in two areas: the approach to testing from the outside (when should I switch to the unit level?) and the persistence layer (I use an ORM (Object Relational Mapping), or the layers of my application are mixed together). There are different approaches one might take to write code; what is usually shared across the source code is the technique of splitting problems and then combining all the pieces to solve the whole. Let's dive into chunking and what it means to use it to start tackling the transition from katas to production code. Chunking The process of chunking happens without one noticing it, but practitioners are experts in this technique. The chunking approach is described in Learning How to Learn by Barbara Oakley and also depicted by Felienne Hermans in her book The Programmer's Brain. The process works like an algorithm: given a complex problem, what are the pieces that compose it, and which pieces can be split further? With each step, move the needle forward to get the problem solved. Splitting the problem is important, as it gives room for our brain to work without overloading it; we have limits on how much information we can hold at once. Looking specifically at practitioners, this is one of the reasons that one might not have the entire system architecture in her head, as described in What Makes a Great Software Engineer? by Paul Luo Li, Amy J. Ko, and Jiamin Zhu. Taking a step back, if we are talking about katas, they are the first step of chunking. In this stage, we are focused (but limited) on: learning something new (such as the practice of writing tests first), or sharpening a skill; given that TDD is a known subject, one might want to try different styles, such as with or without test doubles, a new architectural style, a new programming language, etc.
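To make that first chunk concrete, here is a minimal, purely illustrative Python sketch of the test-first, baby-steps rhythm on the classic FizzBuzz kata: each assertion below was written before the code that satisfies it, and the implementation grew one branch at a time.

```python
# A baby-steps pass at the FizzBuzz kata: each assertion below was
# written first, watched fail, and then satisfied with the smallest
# change to fizzbuzz() that made it pass.

def fizzbuzz(n: int) -> str:
    """Return 'Fizz'/'Buzz'/'FizzBuzz' for multiples of 3/5/15, else the number."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The tests, in the order the baby steps added them:
assert fizzbuzz(1) == "1"          # step 1: the simplest possible case
assert fizzbuzz(3) == "Fizz"       # step 2: forces the multiple-of-3 branch
assert fizzbuzz(5) == "Buzz"       # step 3: forces the multiple-of-5 branch
assert fizzbuzz(15) == "FizzBuzz"  # step 4: forces the combined branch
```

The value is not the final function, which is trivial, but the discipline of letting each failing test drive the next smallest change.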
Without this first step, it might be difficult for practitioners, on the job, to learn TDD, baby steps, simple design, refactoring, architectural styles that might fit the problem at hand, the pragmatic approach to a problem, and so on. There is a lot to take into account; katas abstract that away and focus on the single technique at hand. For example, take the following (non-exhaustive) list of katas and their focus points: fizzbuzz focuses on baby steps; mars rover focuses on the TDD flow; smart fridge focuses on test doubles; gilded rose focuses on legacy code. Katas exist to ease the process of learning the techniques that practitioners use on a daily basis. Of course, this is just the first step, the first chunk, that allows practitioners to become effective in their work. Expanding to Production Code Moving from a kata setting to a professional project that is in production is not as transparent as it might look. Let's take into account brownfield projects, the type of project practitioners are most likely to face. The first barrier that does not transfer easily from katas is that the code might mix too many responsibilities in a single class, there might be too much code to understand because it was written by a developer who has already left the company, or the project might have too many dependencies. Regardless of the cause, the challenges add up. Referring back to chunking, the first step here is to identify a single point that can be tackled. This is an important step, as the technique is already in training. Focusing on a single aspect at a time to improve production code plays an important role.
Let's think about the following scenario: we have an application that was developed with an MVC (Model-View-Controller) framework, but the layers have been mixed and there is no clear layering going on; besides that, there is no testing in place, and the application is mainly tested through a manual approach. On top of that, practitioners want to apply the new techniques they've learned to make the code maintainable. As we already discussed, the key point here is to identify the pieces of the puzzle first. Trying to tackle all the listed problems at once might do more harm than good. Let's enumerate the key chunks: Business logic is mixed between layers. It is difficult to draw a line between the layers, often leading to manual end-to-end tests: if it were an API, testing would be done by sending requests manually; if it were a web application, by accessing the browser and navigating as the end user would. There is no automated testing in place. Practitioners want to apply new techniques they have learned; an example could be implementing a new algorithm that performs faster, or restructuring the code to fit an architectural style. If we think about them one by one, we start to see a correlation with specific katas we might want to perform: Mixed Business Logic Between Layers Gilded Rose is a good candidate for that; applying the golden master technique helps to improve the internal structure of the code. There Is No Automated Testing in Place Once again, Gilded Rose allows the creation of test cases that do not yet exist. As the previous step used the golden master, the code should now allow practitioners to write new tests that cover specific edge cases that weren't covered before.
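To illustrate the golden master technique mentioned above, here is a hedged Python sketch; `legacy_price` is a made-up stand-in for whatever untested legacy routine you face, not code from the Gilded Rose kata itself.

```python
# Golden master in miniature: record the current behavior of untested
# legacy code over many inputs, then use the recording as a safety net
# while refactoring. legacy_price() is an invented stand-in.

def legacy_price(quality: int, days_left: int) -> int:
    # Imagine tangled legacy logic that nobody fully understands.
    price = quality * 10
    if days_left < 5:
        price -= quality * 2
    return max(price, 0)

def record_golden_master(func, inputs):
    """Capture current behavior as a mapping of input -> output."""
    return {args: func(*args) for args in inputs}

def check_against_golden_master(func, master):
    """True if func reproduces every recorded output."""
    return all(func(*args) == expected for args, expected in master.items())

inputs = [(q, d) for q in range(51) for d in range(31)]
golden_master = record_golden_master(legacy_price, inputs)

# After each refactoring step, re-run the check as a safety net:
assert check_against_golden_master(legacy_price, golden_master)
```

Once the recording passes consistently across refactoring steps, the structure is usually clean enough to start writing targeted unit tests for the edge cases the golden master only covers implicitly.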
Practitioners Want to Apply New Techniques Learned At this point, the two previous steps are already in place; with that, the code should be testable enough that the production code is not highly coupled with the test code. This should be a health check before implementing the new techniques. Can you answer yes to the question: if I refactor, do the tests stay unchanged, and do I have the confidence to release? If the answer is yes, then applying the newly learned techniques should be doable.
In today’s increasingly digital world, we have become more reliant on online applications and services. We depend on these technologies daily and expect them to function as intended whenever we access them. Because of this digital proliferation, IT leaders have prioritized continuous availability. Teams want to reduce downtime where possible because downtime leads to poor customer experience and negative reviews. As a result, potential customers have second thoughts, and established customers leave to pursue more available options. Teams invest in monitoring tools to maintain business-critical uptime. However, multiple single-domain monitoring tools may begin to overwhelm teams as IT stacks grow more complex. The average team has 16 monitoring tools, and some have as many as 40, according to the Moogsoft State of Availability Report. This means IT teams have to monitor 16-40 separate tools simultaneously. All this tool surveillance is inconvenient and risky — the more tools to look after, the higher the likelihood of the team missing important information among all the noise. Additionally, monitoring takes up to 20% of a team’s time — time better dedicated to innovation and improvements. Even with the major time investment, teams still struggle with incident detection. Despite all the tools, customers are still the first to flag problems 45% of the time. So what’s the value of all the monitoring tools if they only catch issues about half the time? DevOps and SRE (site reliability engineering) teams need a more efficient monitoring approach that increases availability and optimizes the customer experience. The Issue: Incomplete Information Incident management point solution tools solve specific problems within the digital experience, IT infrastructure, application, or network. As the historical solution to monitoring, point solutions have perfected their piece of the availability puzzle. 
However, these solutions do not talk to one another, resulting in silos that obscure the big-picture view of the IT ecosystem. Point solution pitfalls include: Cost and Inefficiency With many tools come many licenses, and those expenses add up quickly. Also costly is the time engineers must spend babysitting the disparate monitoring tools and the data they generate. Research shows engineers spend more time supervising tools and “context-switching” than anything else, including engaging in productive, value-adding work. Silos That Slow Progress With so many monitoring tools to watch, information becomes lost within individual tools. Even if the information escapes its silo, engineers can miss important context when assembling the full view of the incident. These information gaps slow communication, delay mean time to recovery (MTTR), and extend downtime. Needless Noise When teams work with multiple point solutions, separate tools redundantly report interconnected issues. This overlapping information inflates the number of alerts the team must sift through to find the incident’s origin. In addition, extraneous noise and irrelevant alerts extend incident timelines and MTTR. The Streamlined Solution: Tie Your Tools Together With AIOps A plethora of monitoring tools means engineers need a way to thoughtfully connect them to see the forest (the entire IT ecosystem) for the trees (the individual point solutions). Domain-agnostic artificial intelligence for IT operations (AIOps) links these tools and aggregates monitoring data. AIOps — the future of IT operations — combines automation with expert supervision in a single tool. With the ever-increasing amount of data that tools generate, no one can manage all of it manually. AIOps can help increase uptime and availability by detecting anomalies before they escalate into an incident. AIOps alerts the human team and presents this information so they can fix the situation quickly.
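As a purely illustrative sketch (not any vendor's actual algorithm) of the kind of statistical check behind such anomaly detection, consider flagging a metric value whose z-score against a trailing window exceeds a threshold:

```python
# Illustrative anomaly detection on a metric stream: flag any point
# that deviates sharply (by z-score) from the trailing window, so the
# team is alerted before a degradation becomes an outage.
from statistics import mean, stdev

def detect_anomalies(values, window=10, threshold=3.0):
    """Return indices of points that deviate sharply from the trailing window."""
    anomalies = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady latency with one sudden spike at index 15:
latency_ms = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99,
              100, 102, 101, 99, 100, 450, 101, 100]
assert detect_anomalies(latency_ms) == [15]
```

Production AIOps platforms use far richer models than a single z-score, but the principle is the same: learn the normal shape of a signal and alert only on meaningful deviation, rather than forwarding every raw threshold breach.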
An integrated AIOps approach offers many advantages, including: One Platform AIOps centralizes the information from many monitoring tools to give a big-picture view of the entire system’s health. Instead of jumping between individual tools to gather data, an engineer gains a holistic view in a single dashboard. AIOps summarizes information so it’s understandable at a glance. When an incident occurs, AIOps automates the workflow to simplify incident response, thereby decreasing MTTR. System Optimization AIOps consolidates alerts from multiple monitoring tools, organizing and contextualizing information. This enriched data is more informative and actionable than the siloed data generated by point solutions. As the system reduces noise, teams detect incident origins more quickly, and MTTR decreases. Incident Lifecycle Insight AIOps implementation creates a singular place for engineers to engage with incidents and track them through their entire lifecycle. A single line of sight during the incident’s lifespan improves resolution efficiency and reduces downtime. AIOps Saves Time and Resources Beyond just reducing downtime, AIOps can boost employee satisfaction by automating time-consuming and repetitive tasks. This automation reduces toil and frees employees to work on interesting, fulfilling projects, increasing productivity and leading to happier employees. AIOps’ automation also reduces operational costs. Manually managing incidents is labor- and time-intensive, leading organizations to hire additional employees to try to keep up. AIOps automates workflows, improving efficiency so organizations can best manage their headcount. So why isn’t everyone using AIOps? A common misconception is that new technology means significant change management, major spending, and complicated new processes.
However, with the proliferation of software as a service (SaaS), AIOps implementation is remarkably less complicated and requires fewer resources than previous deployments in on-premise data centers, and its value is swiftly apparent. Further, AIOps for SaaS incorporates the myriad benefits inherent to SaaS products, such as scalability based on business needs and minimal ongoing maintenance. In addition, AIOps works with SaaS products, further increasing its value proposition for complicated IT environments. In the ultra-competitive digital world, complicated IT environments can’t rely merely on numerous monitoring tools. Multiple tools create delays and downtime — and unhappy customers. AIOps solutions offer engineers a holistic view of the incident lifecycle, facilitate issue identification and resolution, and ultimately lead to improved availability and a better customer experience.
Overview “Eat your veggies!” “Exercise multiple times a week!” “Brush your teeth and floss daily!” Such are the exhortations that every child has heard (many times!) and grown to loathe. However, these are not practices designed solely to make one suffer: they are encouragements to help one develop and maintain good hygiene. Dictionary.com defines hygiene as “a condition or practice conducive to the preservation of health, as cleanliness.” As in, “if you do these things consistently, you’ll reduce the chance of bad things happening to your health in the future.” As much of a pain as brushing one’s teeth daily is, having the dentist pull said teeth due to a neglect of this care is going to be much more painful! This concept can easily be applied to software engineering as well. Software projects have their own maintenance aspects outside of the main code development tasks: documentation, dependency management, deployment, and so on. Supporting these aspects might not be the most exciting or intellectually-stimulating work, but just like human hygiene, such is the nature of supporting a project’s hygiene: neglecting it could cause major pain for the developer in the future. Thus, when developing the practices and activities for a software development team, make it a point to develop a plan for maintaining the project’s hygiene via regularly scheduled activities that can be incorporated into the project’s development plan. Concepts Dependencies Virtually all software projects nowadays are built using external software libraries: from third-party frameworks like Spring or Django to the programming language in which the code for the project is written (Java, C++, Python, and so on). These external libraries, naturally, are software projects themselves; in the majority of cases, the developers behind these projects are actively maintaining them and publishing new versions of the code at varying points in time.
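To make the dependency-tracking chore concrete, here is a toy Python sketch of the comparison that automated detection tools perform for you; the package names and version numbers are hardcoded for illustration, whereas a real tool would query the package registry:

```python
# A toy freshness check: compare pinned dependency versions against the
# latest known releases. The version data here is hardcoded for the
# example; a real tool queries the package registry instead.

def parse_version(v: str):
    """Turn '1.2.3' into (1, 2, 3) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def outdated(pinned: dict, latest: dict) -> list:
    """Return packages whose pinned version lags the latest release."""
    return [name for name, ver in pinned.items()
            if name in latest and parse_version(ver) < parse_version(latest[name])]

pinned = {"requests": "2.28.0", "flask": "2.3.2", "numpy": "1.24.0"}
latest = {"requests": "2.31.0", "flask": "2.3.2", "numpy": "1.26.4"}

assert sorted(outdated(pinned, latest)) == ["numpy", "requests"]
```

Tuple comparison also avoids the classic string-comparison trap where "1.9.0" would sort after "1.10.0".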
Points to Ponder How will the project team know when a new version has been released for any of the project’s software dependencies? Manually checking for new versions can be an arduous process; automated detection tools like Dependabot can reduce or eliminate this type of toil. Once a new version of a software dependency is released, what will the protocol be for evaluating when to incorporate it into the project? Save for unresolvable breaking changes, a new version of a software dependency should be incorporated as soon as possible for a project. While it’s not delivering immediate value to the project, it enforces the habit of keeping the project’s code up-to-date in conformance with the dependencies that it uses. Sticking to an older version of a dependency because “it just works” is fine — until it isn’t. One day, the announcement from the internet may arrive that it is absolutely necessary to upgrade; suddenly, it could be necessary to jump several years’ worth of dependency versions. What could have been a series of small changes to the project’s code every so often is now one large set of unplanned work that needs to be done yesterday. Deprecations As mentioned above, software is frequently in a constant state of improvement and refinement; sometimes, this does not mean solely adding new code and functionality, but also taking code and functionality away. Ideally, software developers will indicate code and functionality that they eventually intend to remove by marking it as deprecated. Points to Ponder When code in an external dependency has been deprecated, what will the protocol be for replacing it? Think of deprecation flags as a good-will gesture: the developers of the external dependency have announced that they will be making a code change that will require changes in the code that uses the dependency, but they will give external users the benefit of time to prepare their own code for the change. 
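One hedged way to act on that courtesy, sketched here for a Python project, is to escalate DeprecationWarning to an error during tests so that deprecated usage fails the build instead of lingering unnoticed; `old_api` and `new_api` are made-up stand-ins for a dependency's code:

```python
# Sketch of a deprecation protocol: run code with DeprecationWarning
# escalated to an exception, so deprecated calls fail loudly in CI.
import warnings

def call_with_deprecation_check(func, *args, **kwargs):
    """Run func with DeprecationWarning treated as an error."""
    with warnings.catch_warnings():
        warnings.simplefilter("error", DeprecationWarning)
        return func(*args, **kwargs)

# Stand-ins for a dependency's deprecated API and its replacement:
def old_api():
    warnings.warn("old_api is deprecated; use new_api", DeprecationWarning)
    return 42

def new_api():
    return 42

# The deprecated call now fails loudly...
try:
    call_with_deprecation_check(old_api)
    raise AssertionError("expected a DeprecationWarning to be raised")
except DeprecationWarning:
    pass

# ...while the replacement passes cleanly.
assert call_with_deprecation_check(new_api) == 42
```

Test runners typically offer the same escalation via configuration (for example, a warnings filter in the test suite's settings), which achieves this without wrapping individual calls.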
Take advantage of that courtesy and plan likewise for how and when to make the proper adjustments, as eventually, a new version of the external dependency will be published in which that deprecated code will no longer exist. Once a software dependency has been deprecated, what will the protocol be for replacing it? Sometimes, deprecation occurs not only to specific parts of a software project’s code, but to the entire project itself. An example of this was Netflix’s Hystrix library, for which the company announced in 2018 that it would cease active development. When cases like this occur, it is advisable to devote time to investigating how to replace its usage in a project’s code, mostly for the same reasons as with code dependencies: Eventually, some event like an announced security vulnerability might compel the upgrade, at which point the work would become necessary but also would not have been planned for. Workflow The software development life cycle contains many steps besides the software development itself: testing, code linting, deployment, and so on. Frequently, what were once steps that required manual action in order to be completed have fallen under the purview of software programs to be configured and then executed as part of an automated process. Indeed, the concept of automating the various parts of the software development life cycle — now known as DevOps — has grown rapidly in the past decade into its own career path. Points to Ponder How will the team determine whether any processes in the development workflow could be automated? Continuous Integration/Deployment systems like CircleCI and Jenkins already provide a multitude of capabilities for automating various parts of the development life cycle; however, sometimes there exist opportunities for creating further automation in a project outside of CI/CD environments. 
For example, if a project includes the development of multiple microservices, it might be worth devoting time to investigating the creation of a home-grown equivalent to a programmatic initializer service like Spring Initializr. How will the team evaluate whether any steps of the workflow could be re-arranged? Even if a verification step like code linting might already be a defined part of the automated CI/CD process, moving that step into an earlier part of the development process could reduce the turnaround time between the developer creating changes and finding out whether those changes could be improved. Git provides hooks wherein it will execute a script before or after a step of the Git process; in this scenario, the code linting step could be executed in the pre-commit hook so that the developer is notified about a potential code smell immediately instead of eventually receiving a message from the CI/CD system at a later point. If any parts of the development workflow have been disabled, when will the team re-examine whether they could be re-enabled? Sometimes, code or development life cycle steps need to be disabled for a variety of reasons, e.g., an external test dependency that has suddenly become unreliable. This should, however, be an expressly temporary measure. If left unresolved, the reason why the measure was intended to be temporary could eventually be forgotten, and the measure will ossify and become just another “that’s just how things are” that could potentially prevent other issues from being detected. It is worthwhile to schedule a periodic check-up of the project code base and development process to detect whether a “temporary” measure has indeed become de-facto permanent and what would need to be done to resolve the underlying issue. Documentation Ah, writing documentation — the software developer’s favorite task!
/s However, while documentation work might not be the most pleasant of activities, the efforts will eventually bear fruit. Tribal knowledge (or rather, the lack of access to it) and winding up on the wrong side of the bus factor are two potential impediments to group productivity in a project, and the more documentation that is produced for the project, the more the risks of getting impeded can be mitigated, especially in cases where it’s not possible to increase the bus factor via “organic” means (i.e., assigning more people to the project). Points to Ponder How and when will the team evaluate whether a developer with zero knowledge of the project can configure their environment solely by reading the project’s documentation? On projects that are in active development, the answer might always be “no” in the strictest sense of the term, as non-trivial software projects will have enough aspects that eventually *some* part of the documentation will fall out of date. However, it is a worthwhile effort to strive towards as much of a “yes” as possible, lest the mentality of “it’s not possible to keep fully up-to-date, so why bother” sets in and the documentation ages to eventual uselessness. A good exercise here is to provide an experienced developer on the project with a wiped-clean environment and have them attempt to set their environment up via the method described above; they will have the experience to know which gaps in the documentation need to be filled in and where. How and when will the team evaluate whether a developer with zero knowledge of the project can learn how to contribute to the project solely by reading the project’s documentation? Related to the point above, a project’s development process can vary from others in a multitude of ways: What is the project’s branching strategy? What is the software development lifecycle for the project? How does one create a task? What types of tasks are there? How are code reviews conducted?
Who participates in the code reviews? And so on. While these might be considered innate knowledge for somebody immersed in the project, it will not be so for somebody completely new to the project. A walkthrough with somebody who’s knowledgeable about the project to “show the ropes” will help instill this knowledge in the newcomer, but that a) takes time away from the experienced developer that could otherwise be used for development, and b) provides no guarantee that the experienced developer will know precisely which subjects they need to cover. Again, the same exercise as for the point above would help: have an experienced developer periodically check the project documentation to make sure that the development process information is up-to-date. Conclusion All of the points listed above are merely guidelines. The circumstances for each software project will be different and hence require different policies for maintaining project hygiene. Likewise, it is possible that other aspects of project hygiene that have not been mentioned here will be relevant for a particular software project; these should naturally be considered as well. What matters most is the concept of incorporating periodic tasks that re-examine the overall state of the software project and evaluate what steps can be taken either to bring the project into good working order or to keep it there. Again, these will not be the most engaging of tasks, but this “boringness” will help prevent “exciting” catastrophes in the future, so get the veggies on the plate, and start checking that documentation!
Corporate giants like Uber, Tesla, and Netflix have proved that the future is going to be an amazing place to be. They radically disrupted the entire industries in which they operated. Leveraging software, these behemoths created amazing experiences for their customers. IT leaders are aware that robust, high-quality software can help them deliver amazing digital experiences. To do that, DevOps has become paramount to developing high-quality software, as it can help you improve continuous integration, continuous delivery, continuous testing, continuous monitoring, and continuous feedback. Value stream mapping can help by supplying the metrics needed for continuous feedback. Exploring Value Stream Management in Detail Software delivery is getting more challenging every day while becoming one of the critical success factors. This makes it difficult for organizations to maintain high performance and an edge in the market. To effectively manage value streams, organizations need proper visibility and control of the various interconnected value streams across the portfolio. Moreover, modern CIOs are responsible for yielding more business value with fewer resources. As a result, it isn't easy to align business goals with IT work, fast-track software delivery processes, and raise quality. This is where the role of value stream management comes in. Value streams incorporate everything in the SDLC, from ideation to production, that is required to deliver products or services to customers. Value stream management is designed to put an end to operational silos. Instead, it develops effective connections across the major teams, tools, and processes to deliver a top-notch quality product. As a result, we can say that it can effectively drive efficiency in software delivery. As we saw above, value stream management allows software development organizations to effectively and efficiently transform ideas into customer value.
Your organization can begin to witness benefits as soon as you execute a value stream management approach. You can: Set more realistic goals. Address business constraints better. Identify and eliminate waste. Get rid of bottlenecks and silos. Achieve visibility, traceability, and transparency. Fast-track software delivery. Leverage real-time metrics. Improve product quality. Emphasize results and KPIs. Coordinate and automate workflows. Understand opportunities for automation. Bring tremendous business value. Embed governance into SDLC. Foster cross-functional collaboration. Connect multiple processes, teams, and tools. Lay a foundation for a long-term improvement plan. Streamline communication between business and IT. Not just that, with value stream management: Executives can make data-driven investments to enrich customer experience. Product managers can take strategic and optimized decisions. Release managers can constantly monitor the existing state of the product releases. Test environment managers can gain insights to improve cost efficiency. Dev managers can get more transparency in the development processes. How Does VSM Maximize the Output of DevOps? DevOps teams can utilize value stream management to effectively identify bottlenecks and pain points, manage errors, infuse visibility across the entire cycle, remove redundant processes, boost cross-functional collaboration, identify opportunities for automation, and more. A software value stream includes every task, right from idea to production, to help you deliver products or services to your clients. It can enable your business to deliver superior-quality software at a robust pace while minimizing risk. It maximizes value from customer request to delivery and enables you to improve time-to-market, enhance throughput, and optimize business outcomes. 
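As an illustrative sketch of the measurement behind value stream mapping, consider computing how long a work item spends in each stage so the bottleneck becomes visible; the stage names and timestamps below are entirely made up:

```python
# Toy value-stream measurement: given the timestamps at which a work
# item entered each stage, compute hours spent per stage so the
# bottleneck (the "waste") stands out.
from datetime import datetime

def stage_durations(events):
    """Given (stage, entered_at) events in order, return hours per stage."""
    durations = {}
    for (stage, start), (_, end) in zip(events, events[1:]):
        durations[stage] = (end - start).total_seconds() / 3600
    return durations

ticket = [
    ("idea",                datetime(2023, 3, 1, 9, 0)),
    ("development",         datetime(2023, 3, 2, 9, 0)),
    ("code_review",         datetime(2023, 3, 3, 9, 0)),
    ("waiting_for_release", datetime(2023, 3, 3, 13, 0)),
    ("in_production",       datetime(2023, 3, 10, 13, 0)),
]
durations = stage_durations(ticket)

# The seven days spent waiting for a release window dwarf everything
# else, pointing at the release process as the first bottleneck to fix.
assert max(durations, key=durations.get) == "waiting_for_release"
```

Aggregated over many work items, this same calculation yields the lead-time and wait-time numbers a value stream map is built from.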
You can develop your own data-driven value stream by mapping out your as-is DevOps stream, identifying waste, building your to-be DevOps stream, and communicating the change to your organization. Conclusion To conclude, value stream management is fundamentally a shift in mindset. Value stream mapping is the first step to getting better visibility into your pipeline, and value stream management is the key to making that visibility lead to change. It will be a lot easier to make strategic decisions when every stakeholder has access to the same data. That is why organizations are building their transformations on it; indeed, it can be considered the next evolution of DevOps.
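As a hypothetical sketch of the mapping exercise above, the as-is stream can be backed by simple numbers: given timestamps for when a work item enters and exits each stage, you can compute per-stage durations (to spot bottlenecks and waste) and end-to-end lead time. The stage names and the `StageEvent` type are illustrative assumptions, not part of any standard VSM tool.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StageEvent:
    stage: str          # illustrative stage names, e.g. "ideation", "development"
    entered: datetime
    exited: datetime

def stage_durations(events):
    """Hours a work item spent in each stage -- long stages hint at bottlenecks."""
    return {e.stage: (e.exited - e.entered).total_seconds() / 3600 for e in events}

def lead_time_hours(events):
    """End-to-end lead time: first stage entry to last stage exit, in hours."""
    start = min(e.entered for e in events)
    end = max(e.exited for e in events)
    return (end - start).total_seconds() / 3600

# One work item flowing through a three-stage stream
work_item = [
    StageEvent("ideation",    datetime(2023, 1, 2, 9), datetime(2023, 1, 3, 9)),
    StageEvent("development", datetime(2023, 1, 3, 9), datetime(2023, 1, 6, 9)),
    StageEvent("testing",     datetime(2023, 1, 6, 9), datetime(2023, 1, 9, 9)),
]

print(stage_durations(work_item))   # development and testing dominate
print(lead_time_hours(work_item))   # 168.0 hours, i.e. one week
```

Aggregating these two metrics across many work items is what turns a one-off mapping workshop into the continuous, shared data the conclusion argues for.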
Last year, I enjoyed reading a new and provocative book that explores a new way to think not only about some aspects of life but about everything. Essentialism: The Disciplined Pursuit of Less by Greg McKeown brings this new perspective, but is it right? Can we apply it to software development? The short answer is: yes. Furthermore, it can make you more efficient, and in this post, we'll explain how with the essentialism methodology. Software engineering has several approaches and definitions; this post will use my favorite one, from Modern Software Engineering by Dave Farley: "Software engineering is the application of an empirical, scientific approach to finding efficient, economic solutions to practical problems in software." Efficiency is not an optional part of the software development process, and when we talk about efficiency, we also include avoiding the waste of resources such as computing power and people's time. That is where essentialism helps, mainly because it starts with three core principles: Individual choice: We can choose how to spend our energy and time. The prevalence of noise: Almost everything is noise, and very few things are precious. The reality of trade-offs: We can't have it all or do it all. If you still don't follow me, let's take a step back and talk about the enemies of good software development. The Obstacles to the Software Process Last year, Forbes enumerated sixteen obstacles to creating promising software in 16 Obstacles To A Successful Software Project (And How To Avoid Them). We will highlight some as follows: Hyper-focused planning and design Unclear or undefined client expectations Overlooking nonfunctional requirements Not aligning early on the "must-haves" Does it sound like a paradoxical problem? How can we spend a lot of time planning something we don't understand?
We're putting energy into something that does not help us solve the problem or hit the right target. That is issue number one: how to spend time and energy on the correct problem, putting effort only into what matters. That is where software development needs essentialism. We need to find the highest point of contribution: the intersection of the right thing, the right time, and the right reason. The highest point of contribution Clarity is part of the process of achieving success in software development, and the power of choice is crucial for it. A choice is an action that defines where you'll put your energy, and picking the best of your options can help you get close to your objective. The energy on essentialism vs. no-focus The act of choosing is strict, mainly because every "yes" we consider implies several "no"s we need to say. It was one of the most challenging points of my life; initially, I associated "yes" with open opportunities, like the Yes Man movie. It is a battle between the fear of missing out (FOMO) and focus. It might affect you as well. I recommend trying it out and seeing if you are like me: the "no" gives you freedom. Again, it is not easy. I still fight it, but I find it necessary, actually essential. As a software engineer, saying no also includes avoiding the newest technology in the market, because this ally might become an enemy and make things more complicated than necessary. Essentialism Is Saying "No" To Many Things, Including Over-Engineering When we compare today's software development tools with those of ten years ago, we have many options that are supposed to make our lives easier, including methodologies, frameworks, and tools such as Kubernetes, microservices, etc. But why do I use the word "supposed"? Having many tools gives us options that can make our lives harder, and exploring the wrong tool makes things more complex (see "Complexity is killing software developers" by Scott Carey).
In software development, there is no silver bullet; accordingly, it is ok not to use microservices, Kubernetes, event-driven design, and hexagonal architecture all the time. When we talk about microservices, it is impossible not to think of Sam Newman's two books on the topic. Yet even he wrote "Should I use microservices?", where he advises against them for a new product. The point here is not to blame new trends or technologies: those are amazing and helpful at the proper time. The first question is how far they are from our goals, and essentialism can help us get there with simplicity. The Bare Necessities When we talk about software decisions and design, we think about software architecture, which has two laws: The why is more important than the how. Everything has trade-offs. Once you find the goal and know where you want to put your and your team's energy, simplicity can help decrease unnecessary software development risk. I'm not against innovation: I love it, but exploring the wrong tool can lead you down the wrong path. Not starting with those new and fancy techs, and exploring more straightforward ways to go, such as a monolith or skipping Kubernetes, might be a good solution for the beginning. Being adaptive is a fundamental part of the Agile process, and software in production is part of it. Remember, not starting with microservices does not mean never using them; you can explore them when necessary. The same goes for the hexagonal model: do you need all of those layers to start? Let's start with three, as in an MVC. Start simple and evolve your software and architecture; that is part of evolutionary architecture. Software needs to be adaptive, not able to see the future. In summary, looking for the bare necessities is a good choice, and this time your software architecture even comes with a song. We explored the architecture decision, but regarding methodologies, we also have human interactions, which include meetings. Let's talk about them as well.
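As a minimal sketch of the "start with three layers" idea (all names here are hypothetical, not taken from any of the books cited), a small monolith can separate presentation, domain rules, and storage without any framework, and each layer can later be swapped or split out as the system evolves:

```python
class TaskRepository:
    """Data layer: in-memory storage standing in for a real database."""
    def __init__(self):
        self._tasks = []

    def add(self, title):
        self._tasks.append(title)

    def all(self):
        return list(self._tasks)


class TaskService:
    """Domain layer: business rules live here, not in the controller."""
    def __init__(self, repo):
        self._repo = repo

    def create_task(self, title):
        if not title.strip():
            raise ValueError("A task needs a title")
        self._repo.add(title.strip())

    def list_tasks(self):
        return self._repo.all()


class TaskController:
    """Presentation layer: translates requests into service calls."""
    def __init__(self, service):
        self._service = service

    def post(self, title):
        self._service.create_task(title)
        return {"status": "created"}

    def get(self):
        return {"tasks": self._service.list_tasks()}


controller = TaskController(TaskService(TaskRepository()))
controller.post("write the design doc")
print(controller.get())  # {'tasks': ['write the design doc']}
```

If the product later needs a real database or a separate deployment unit, only the repository (or one layer at a time) has to change, which is exactly the evolutionary path the text describes.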
Saying "Hasta la Vista, Baby" to Endless Meetings When we talk about software development, we need to consider the most valuable resource, primarily because we cannot get it back once we waste it: time. When we talk about time, on one hand we have over-engineering; on the other, meetings, meetings, and more meetings. We must remember that a meeting is a group of people stopping their work to pursue a goal. Save people time by making sure the prerequisites of the meeting are in its description, and use meeting notes to record the decisions. Also pay attention to the length of the sessions: Parkinson's Law says that work expands to fill the time allotted for its completion. Summary When we talk about software development, ensuring that you're doing the right thing at the right time is the key to delivering. Essentialism is a great partner to make it happen, mainly because it decreases the chance of over-engineering, endless meetings, and waste. Even though it is categorized as a non-IT book, "Essentialism: The Disciplined Pursuit of Less" helps guide us toward simplicity and toward trusting uncomplicated solutions, since simple software carries less risk of misunderstanding and is slower to turn into legacy code. I hope you enjoy the book as much as I do.
The "lost art of software design" refers to the idea that the principles and practices of software design are not given the attention and importance they deserve in the software development process. This can result in poorly designed systems that are inflexible, difficult to maintain, and prone to errors and bugs. There Are a Number of Factors That May Contribute to the "Lost Art" of Software Design Time Pressures In many cases, software developers are under pressure to deliver new features and functionality as quickly as possible. Unfortunately, this can lead to a focus on speed and efficiency at the expense of design quality. Lack of Training Some software developers may not have received formal training in software design principles and practices or may not have had the opportunity to learn from more experienced developers; for example, developers are often expected to be full-stack, which sometimes results in poor UI code. Complexity As software systems become more complex, it can be difficult to design them in a way that is flexible, maintainable, and scalable. This can lead to design shortcuts and poor design decisions. Agile Development While agile development can be an effective approach for delivering software quickly, it can also lead to a focus on short-term goals and a lack of emphasis on long-term design quality. To address the "lost art" of software design, it is important for software developers to prioritize design in the software development process and to make time for design activities such as architecting and design reviews. It is also important to invest in training and education to ensure that developers have the skills and knowledge they need to design high-quality systems. Finally, it is important to adopt design practices and principles that promote flexibility, maintainability, and scalability, such as modular design, loose coupling, and separation of concerns.
In contrast, evolutionary architecture is a design approach that emphasizes the importance of continuously evolving and improving the architecture of a system over time. It is based on the idea that software systems are constantly changing and that it is impossible to predict all of the requirements and constraints that a system will face in the future. One of the key benefits of an evolutionary architecture is that it allows a system to adapt and change as the needs of the business and the technology landscape evolve. By building flexibility and evolvability from the start, it is possible to create a system that is able to withstand the constant changes and uncertainties of the software development process. This is achieved through principles such as incremental development, modular design and deployments, continuous integration and deployment, and the use of microservices. Evolutionary Architecture Is Based on a Number of Key Principles Incremental Development Rather than trying to design the entire system upfront, an evolutionary architecture takes an incremental approach, building and delivering small, manageable chunks of functionality over time. This allows the architecture to evolve and adapt as the system grows and changes. Modular Design An evolutionary architecture emphasizes the importance of modularity, breaking the system down into smaller, independent components that can be developed and tested separately. This makes it easier to evolve and modify the system without affecting other parts of the system. Continuous Integration and Deployment In an evolutionary architecture, it is important to have a fast and reliable process for integrating and deploying code changes to the system. This can be achieved through practices such as continuous integration and continuous delivery. Microservices An evolutionary architecture may also make use of microservices, which are small, independent services that can be developed and deployed separately. 
This allows the system to evolve and adapt more easily, as changes can be made to individual microservices without affecting the entire system. Evolvability An evolutionary architecture should be designed with evolvability in mind, with a focus on creating a flexible and adaptable system that can accommodate changing requirements and technology over time. This may involve the use of design patterns and principles such as loose coupling, high cohesion, and separation of concerns. By adopting an evolutionary architecture, software developers can avoid the pitfalls of the "lost art" of software design and create systems that are flexible, adaptable, and able to evolve over time. This can help ensure that the system remains valuable and relevant in the face of changing requirements and technology rather than becoming obsolete or inflexible. In summary, the "lost art" of software design refers to the idea that software design is not given the attention it deserves in the development process, resulting in poorly designed systems. Evolutionary architecture is an approach that addresses this problem by emphasizing the importance of continuously evolving and improving the architecture of a system over time. By adopting principles such as incremental development, modular design, continuous integration and deployment, and the use of microservices, it is possible to create a system that is flexible, adaptable, and able to withstand the constant changes and uncertainties of the software development process.
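To make "loose coupling" concrete, here is a minimal, hypothetical sketch (the names are illustrative assumptions, not from any particular framework): a service depends on a small abstraction rather than a concrete implementation, so the notification channel can evolve, say from email to a message queue, without touching the service itself.

```python
from typing import Protocol

class Notifier(Protocol):
    """The abstraction OrderService depends on: any channel that can send."""
    def send(self, user: str, message: str) -> None: ...

class EmailNotifier:
    """One concrete channel; replaceable without touching OrderService."""
    def send(self, user: str, message: str) -> None:
        print(f"email to {user}: {message}")

class OrderService:
    """Coupled only to the Notifier abstraction, not to email specifically."""
    def __init__(self, notifier: Notifier) -> None:
        self._notifier = notifier

    def place_order(self, user: str, item: str) -> str:
        confirmation = f"Order confirmed: {item}"
        self._notifier.send(user, confirmation)
        return confirmation

service = OrderService(EmailNotifier())
print(service.place_order("ada", "keyboard"))
```

The same seam also supports evolvability in testing: a fake `Notifier` can be injected to verify behavior without any real channel, which is one practical payoff of separating concerns this way.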