From Idea to Reality - The Need for Agility
How does one transform a software development idea into a tangible, usable feature? Which tools facilitate this process?
These questions often spark engaging discussions in my training sessions, particularly among seasoned engineers. Imagine receiving a client requirement; how much time does it take to transform this requirement into a usable product or feature for the client?
We understand the time to transition from "idea" to "reality" varies depending on the complexity of the feature. However, consider a scenario where your team is working on a video conferencing application, similar to Google Meet or Zoom. The requirement is to alter the text in a window from "Join Now" to "Join now". Once the change is approved and the file to alter is identified, it takes barely a minute to implement. So, in such a scenario, how long does the transition from "idea" to "reality" take?
I often receive varied answers ranging from one day to six months. The key question here is, who determines if this timeframe is acceptable?
As a developer or project manager, you might assume that you or your executive team hold this decision-making power. However, the true decision-makers are your end users.
The acceptance or rejection of your product hinges on your users' satisfaction. In today's fast-paced world, users anticipate swift and frequent modifications to applications that cater to their evolving needs. They expect product agility. If a user encounters a bug in your product, how long they are willing to wait for a fix depends on their patience and available alternatives. If a more agile, competitively-priced alternative appears, your users might switch. Therefore, to retain your users, your agility should surpass that of your competitors.
Agility, at its core, represents your capacity to swiftly cater to customer needs.
Let's delve into the various phases of the software development process. Here is an overview:
Plan
The software development process is an intricate journey from idea to reality. It involves transforming a concept, often initially just a written proposal, into a functional application that end-users can utilize in a production environment.
This process begins with planning. An idea may be ambitious, and while it may not be feasible to accomplish in the short term, we can identify a smaller, achievable subset. This concept, often called the Minimum Viable Product (MVP) in startup jargon, is the minimum set of features that add value to end-users, thus addressing a specific problem. It is essential to create a list of tasks for this MVP.
Companies behind Software as a Service (SaaS) products, such as Gmail, Facebook, Twitter, or Uber, often have expansive visions. For instance, a ride-hailing company might envision providing autonomous flying vehicles that pick customers up from their homes and deliver them to their destinations. While this may not be feasible with current technology, it addresses a clear need: transport from point A to point B. Despite the limitations of today's technology, it's crucial to explore alternative solutions that can meet this need. Instead of flying vehicles, can we use cars? Instead of autonomous vehicles, can we use human drivers, and still meet the need?
Startups often grapple with the uncertainty of their grand vision's validity - is there a genuine need, or is it just wishful thinking? The concept of the MVP plays a crucial role here, providing a lean method to validate the need by creating a product with only the essential features. The goal is to minimize the waste of resources if the need turns out to be non-existent.
From this grand vision, a small part is taken, and a concrete plan is formulated. This plan comprises specific tasks to be completed within a defined timeframe, often referred to as a milestone. The objective is to ensure these tasks are completed by the end of the milestone, improving the users' experience with the product or service. It's not advisable to work on features that may only be used years down the line unless the organization can afford to invest in such speculative projects.
Planning and execution require the use of project management tools. Here are some popular tools for Project Management:
- Jira
- Monday
- Microsoft Project
- Asana
- Basecamp
- Trello
These tools are employed to capture tasks, assign them to developers, and establish milestones. Developers are then able to estimate the completion time of each task, assess dependencies, and determine the feasibility of the milestones.
Upon finalizing the plan, developers commence work on the assigned tasks. The procedure of capturing tasks, delegating them to developers, and associating them with particular milestones is a crucial determinant of the project's success.
Thus, the planning phase forms the first step in the software development process.
Development
The software development lifecycle proceeds to the development phase after planning. Here, various teams create artifacts pertinent to their roles. These artifacts can include:
- source code crafted by developers
- images produced by the design team
- test cases formulated by the testing team
- user interfaces built by client-side developers
- ...
Development activity primarily takes place within a specialized setting known as the development environment. This environment hosts a comprehensive suite of tools required for development, such as compilers, build tools, test utilities, source-code quality checkers, etc. Often, it also features an Integrated Development Environment (IDE) that provides capabilities for writing, testing, and debugging code. Examples of widely used IDEs include Eclipse, IntelliJ, and Visual Studio Code. The development environment can be a local system like a laptop or an online cloud environment set up by the organization.
Version Control Systems, Code Repositories and Code Review
After the code writing phase, the generated artifacts are stored in a Version Control System (VCS). VCS code repositories such as GitHub, GitLab, and Bitbucket provide centralized repositories for your code, making collaboration among developers on the project easier. It's important to note that the repositories can store more than just source code; they can also contain user-facing components like images. This organized method of storing and collaborating is crucial for successful project development.
Developers, upon completing their artifacts, push them into the Version Control System. Storing source code on an individual's laptop is not advisable. If the laptop fails, the invaluable source code could be lost. Therefore, it's important to ensure that the code is stored securely and durably in a designated place - a code repository.
A code repository offers various advantages. It stores the code in a durable manner (by ensuring that pushed code is replicated for safety). It also enables multiple developers to work together on the codebase. Additionally, code repositories provide tools for code reviews, an essential step in the development process.
Version control systems offer more than just centralized code storage. For instance, GitHub, GitLab, and Bitbucket include features for creating issues, planning sprints, and handling feature requests. These features allow developers to work together on specific tasks. They also include Kanban-like boards for easy task management and project workflow tracking. Additionally, these platforms provide developer environments where code can be written directly within the browser.
Code reviews are usually conducted through code repositories like GitHub or Bitbucket. Many organizations utilize the feature branch workflow model, where developers create a separate branch for each new feature or task. Direct code commits to the mainline branch are avoided in this model.
After developers commit their code to the repository, they initiate a pull or merge request. This request is then reviewed by another engineer on the team, who checks that the logic of the code is sound, looks for issues, and evaluates the quality of the code. In this practice, team members scrutinize each other's code before it is merged into the main branch, ensuring that the code not only adheres to industry standards but also meets the specifications set by the product owner or manager. Detecting bugs early during review helps minimize issues later in the development cycle. Additionally, most version control systems, such as GitHub and GitLab, offer tools that facilitate code reviews, allowing team members to provide feedback on pull requests and discuss code modifications before they are integrated into the mainline branch.
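To make the workflow concrete, here is a minimal sketch of the feature branch steps, driven from Python purely for illustration. It assumes the git CLI is available, a remote named origin, and a mainline branch called main; the branch name and commit message are hypothetical.

```python
# A minimal sketch of the feature branch workflow. Assumes a remote
# named "origin" and a mainline branch called "main"; the branch and
# commit message below are hypothetical.
import subprocess

def run(*cmd):
    """Run a command and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

# 1. Start from an up-to-date mainline.
run("git", "checkout", "main")
run("git", "pull", "origin", "main")

# 2. Create a separate branch for the new feature or task.
run("git", "checkout", "-b", "feature/join-now-label")

# ... edit files here ...

# 3. Commit the change and push the branch; no direct commit to main.
run("git", "add", "-A")
run("git", "commit", "-m", "Change button label to 'Join now'")
run("git", "push", "-u", "origin", "feature/join-now-label")

# 4. Open a pull/merge request in the repository's web UI (GitHub,
#    GitLab, Bitbucket) so a teammate can review before merging.
```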
Build
The code, once added to a Version Control System (VCS), often undergoes a code review process. Upon the reviewers' satisfaction, the code is ready to be transformed into a build.
Developers check their code into the VCS, and once the code review process completes, the build system springs into action. It retrieves the code from the VCS and initiates build jobs. Typically, the build system is alerted to new commits on the mainline branch through webhook triggers from the code repository.
The role of build jobs is essential as they convert source code into built artifacts, primed for deployment across various environments. More often than not, the source code cannot be deployed as is, particularly for languages such as Java and Go that require a compiler for conversion into a deployable form.
Even in the case of interpreted languages like Python and JavaScript, build systems play a pivotal role. They facilitate tasks like dependency download, code packaging, test execution, quality checks, code optimization (minification), and transpilation, among others.
The nature of these build artifacts hinges on the programming language in use. For instance, in the case of Java, the build job transforms a set of .java files (input) into a .jar file (output), which houses compiled Java code in the form of .class files. Conversely, for Go, .go files serve as the input to generate a compiled executable as the output. If the source code supports Docker, a Dockerfile may be used to package the entire codebase into a single Docker image.
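To make the build step concrete, here is a minimal sketch of a build job written in Python. It assumes the git, go, and docker CLIs are installed; the repository URL, package path, and image tag are hypothetical.

```python
# A minimal sketch of a build job: fetch source from the VCS, test and
# compile it, and package the result as a Docker image. Assumes the
# git, go, and docker CLIs are installed; URLs and names below are
# hypothetical.
import subprocess

REPO_URL = "https://example.com/acme/meet.git"  # hypothetical
WORKDIR = "/tmp/build/meet"
IMAGE = "registry.example.com/acme/meet:1.4.2"  # hypothetical tag

def sh(*cmd, cwd=None):
    subprocess.run(cmd, cwd=cwd, check=True)

# 1. Retrieve the code that triggered the build.
sh("git", "clone", REPO_URL, WORKDIR)

# 2. Run the tests; a failing test fails the whole build.
sh("go", "test", "./...", cwd=WORKDIR)

# 3. Compile the .go files into an executable (the build artifact).
sh("go", "build", "-o", "bin/meet", "./cmd/meet", cwd=WORKDIR)

# 4. Package the codebase into a Docker image using its Dockerfile.
sh("docker", "build", "-t", IMAGE, WORKDIR)
```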
Build automation can be achieved using tools such as Jenkins and GitLab. But how does the build automation tool know that a new commit has been made to the VCS, signaling that it needs to initiate a new build?
Builds can be triggered in three ways:
- Manual - An individual manually logs into the build automation system and triggers a new build.
- Periodic - The build automation system can be configured to automatically run a build at set intervals - hourly, daily, etc.
- Webhook Triggers - A webhook trigger, an HTTP endpoint invoked by the VCS, can be configured to alert the build system about any new commits. This method offers the most efficient approach to build automation (see the sketch after this list).
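To illustrate the webhook approach, the sketch below implements a toy build-server endpoint using only Python's standard library. The port, URL path, and payload fields are assumptions for illustration - real VCS webhooks (GitHub, GitLab, Bitbucket) send richer payloads and signatures that should be verified.

```python
# A minimal sketch of a webhook endpoint that a VCS could call on each
# push, using only the Python standard library. The port and payload
# fields are assumptions, not any vendor's actual schema.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def start_build(branch: str) -> None:
    # Placeholder: a real build system would enqueue a build job here.
    print(f"Triggering build for branch {branch!r}")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Only build when the mainline branch receives a new commit.
        if payload.get("branch") == "main":
            start_build("main")
        self.send_response(204)  # acknowledged, no response body
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```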
The build system is aware of the artifact repository location and is responsible for uploading the build artifacts to the repository.
Artifact Repository
Once the artifact is built by the build system, it is stored in a specialized repository, akin to how raw source code is stored in a code repository. Artifact repositories store different versions of the built artifacts, which are then tested by the Quality Assurance (QA) team; once QA completes, the artifacts are deployed to production.
Notable examples of artifact repositories include:
- JFrog Artifactory
- Nexus Repository OSS
- Docker Registry (which is also regarded as an artifact repository for Docker Images)
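As an illustration, many artifact repositories accept uploads over plain HTTP. The sketch below PUTs a locally built artifact to a repository path using Python's standard library; the URL layout and token are hypothetical, and real repositories define their own upload APIs and authentication schemes.

```python
# A minimal sketch of uploading a build artifact to an artifact
# repository via HTTP PUT. The URL, path layout, and token below are
# hypothetical.
import urllib.request

ARTIFACT = "bin/meet"  # local file produced by the build job
URL = "https://artifacts.example.com/releases/meet/1.4.2/meet"  # hypothetical

with open(ARTIFACT, "rb") as f:
    req = urllib.request.Request(URL, data=f.read(), method="PUT")
    req.add_header("Authorization", "Bearer <token>")  # hypothetical auth
    with urllib.request.urlopen(req) as resp:
        print("Upload status:", resp.status)
```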
Quality Assurance
Quality Assurance (QA) can be performed at various stages of the Software Development Lifecycle. Typically, it commences once a new build becomes available in the artifact repository. If the QA team identifies any issues within the build artifact, they can raise new issues, leading to another iteration of the entire process, from planning to development, to building, and finally back to the artifact repository.
Once the QA team is satisfied with the quality of the artifact, the artifacts are deemed ready for deployment in the production environment.
With the new artifact available in the repository, a round of QA can be conducted. In recent times, many organizations have begun to automate their QA process. In such a context, QA becomes part of development itself: just as the development team writes source code, the QA team writes automated test cases for that code. Viewed from this perspective, QA can be integrated into the development process. However, if QA is a separate manual process, it has to be undertaken once the new artifacts become available.
The Quality Assurance (QA) team conducts their testing in a distinct QA environment. This environment is essentially a clone of the production environment, designed to closely replicate its conditions. This allows the QA team to identify and rectify potential issues under conditions that closely resemble the actual production settings, thereby enhancing the reliability of the testing process.
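As a small illustration of automated QA, the sketch below runs a smoke test against the QA environment, assuming the application exposes a health endpoint; the hostname and path are hypothetical.

```python
# A minimal smoke test against a QA environment, assuming the
# application exposes a health endpoint; the hostname and path are
# hypothetical.
import urllib.request

QA_URL = "https://qa.example.com/healthz"  # hypothetical endpoint

def test_health_endpoint():
    with urllib.request.urlopen(QA_URL, timeout=5) as resp:
        assert resp.status == 200, f"unexpected status {resp.status}"

if __name__ == "__main__":
    test_health_endpoint()
    print("QA smoke test passed")
```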
Deployment
The deployment phase is a crucial stage in the Software Development Lifecycle (SDLC) where the thoroughly tested application is made accessible to the end-users in a "production environment". We may use tools like Jenkins, ArgoCD, etc., to automate and manage this process.
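As a simple illustration, a deployment step often amounts to pointing the production environment at the newly approved artifact. The sketch below assumes the application runs on Kubernetes and drives the kubectl CLI from Python; the deployment, container, and image names are hypothetical.

```python
# A minimal deployment sketch: roll a Kubernetes deployment to the
# newly approved image and wait for the rollout to finish. Assumes
# kubectl is installed and configured; names below are hypothetical.
import subprocess

IMAGE = "registry.example.com/acme/meet:1.4.2"  # hypothetical

# Point the "meet" container of the deployment at the new image.
subprocess.run(
    ["kubectl", "set", "image", "deployment/meet", f"meet={IMAGE}"],
    check=True,
)
# Block until the rollout completes (or fails).
subprocess.run(
    ["kubectl", "rollout", "status", "deployment/meet"],
    check=True,
)
```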
Operations
The Operations stage in the SDLC, commonly referred to as 'Ops', is focused on maintaining and supporting the live application in the production environment. Tasks during this phase include handling failures, addressing scalability and availability issues, troubleshooting, applying software updates, backing up data, and running security checks. A well-managed application not only elevates the user experience but also mitigates the risk of unexpected system downtime. Is everything running smoothly? If not, we need to take corrective action quickly. This is what we call operations.
Analytics
Evaluating the performance of an application for end users often presents a dilemma since it's not practical to individually consult every user. This challenge is addressed through the application of Analytics. The Analytics phase involves the extraction, interpretation, and application of data to inform business decisions and strategies. Tools such as Google Analytics and Kissmetrics can deliver vital insights into areas like user behavior, application usage, feature popularity, and potential areas for improvement. The Analytics phase, utilizing techniques from basic usage statistics to advanced predictive and prescriptive analytics such as machine learning, aids in anticipating trends and suggesting actionable steps. The insights derived from this phase are invaluable for refining future iterations of the application to better align with user needs and preferences. These insights subsequently feed back into the Planning phase for the next software development cycle, exemplifying why the software development process is deemed a "Cycle".
Subsequent Planning phases will incorporate a fresh set of tasks guided by our vision, bugs identified in the previous cycle, and the intelligence gathered from the Analytics tools. This continual input and feedback loop illustrates the cyclical nature of software development, emphasizing the importance of iterative improvement and refinement in meeting end-user needs.
Continuous Integration and Continuous Delivery (CI/CD)
Have you ever pondered the process through which software development teams progress from source code to production? This article aims to elucidate CI/CD, a key aspect of modern software delivery processes.
In essence, CI/CD is the combination of two crucial elements: continuous integration and continuous delivery (or deployment). We will explore each of these components and understand how tools such as Jenkins facilitate their implementation.
What is CI?
Continuous integration is the first component. "Integration" here signifies the amalgamation of code contributed by multiple developers, ensuring it works in harmony. With continuous integration, any change a developer introduces into the codebase is immediately incorporated into the main codebase, triggering automated builds and tests. This level of automation in the integration process enables the early identification and resolution of potential bugs or issues, minimizing wastage (time spent writing code that never ends up being used by end users) and enhancing the overall stability and efficiency of the development process.
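The essence of a CI job can be sketched in a few lines: integrate the change with the latest mainline, then run the automated build and tests, stopping at the first failure. The sketch below assumes a Go codebase and the git and go CLIs; it is an illustration, not any particular CI tool's implementation.

```python
# A minimal sketch of a CI job: integrate the developer's change with
# the latest mainline, then run the automated build and tests. Any
# failure stops the pipeline early. Assumes a Go codebase; the steps
# are illustrative.
import subprocess
import sys

STEPS = [
    ["git", "fetch", "origin", "main"],
    ["git", "merge", "origin/main"],  # integrate with latest mainline
    ["go", "build", "./..."],         # automated build
    ["go", "test", "./..."],          # automated tests
]

for step in STEPS:
    result = subprocess.run(step)
    if result.returncode != 0:
        print(f"CI failed at step: {' '.join(step)}")
        sys.exit(1)
print("CI passed: change integrated cleanly")
```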
What is CD?
Once a stable codebase is achieved via continuous integration, we progress to continuous delivery (or deployment). This phase involves taking the built artifact, such as a Docker image, and deploying it into production environments where end users can access it. But who is responsible for the deployment?
The deployment process is facilitated by CD tools. These tools automate the deployment process, ensuring consistent and correct application deployment. Ideally, once the Quality Assurance (QA) team has given the go-ahead, the deployment should be an automatic and seamless process.
Jenkins as a CI/CD Tool
Let's shift our focus to Jenkins, an open-source automation server that can be utilized for both continuous integration and continuous deployment. Jenkins enables teams to establish pipelines that automate the process of builds and deployments, thus enhancing the speed and reliability of software delivery.
In the realm of CI/CD, the importance of automation can't be overemphasized. Automation, both in the integration and deployment processes, liberates developers to concentrate on their main task: writing high-quality code.
In conclusion, CI/CD combines the principles of continuous integration and continuous delivery, and tools like Jenkins play a vital role in automating this process, thereby enhancing efficiency. Through automation, we can minimize the risk of bugs and downtime, accelerating the deployment of our applications with increased confidence and speed.
An Introduction to DevOps
Relationship of SDLC with DevOps
DevOps is a term closely associated with the Software Development Lifecycle (SDLC) and has gained significant traction in recent times. When you search for DevOps in Google Images, you will come across an infinity symbol (♾️) that represents the DevOps lifecycle.
In most of these images, the DevOps lifecycle is depicted as a continuous cycle, starting with planning and progressing through various phases such as coding, building, testing, releasing, deploying, operating, and monitoring. This cycle then loops back to planning. Notably, some of these phases, from planning to testing, are considered as "Dev," while the remaining phases, from release to operation, are categorized as "Ops."
Interestingly, this depiction bears resemblance to the earlier discussed "Idea" to "Reality" diagram in the context of the SDLC. Both portray continuous cycles, highlighting the iterative nature of software development. Although the concepts are similar, they are represented differently, showcasing the specific nuances of each model.
DevOps: An Evolutionary Journey
Let me tell you a story. During the financial crisis of 2008-2009, startups were left with limited resources. Among these startups was one where a man named Alex worked as a developer. With funding scarce, the company had to leverage its existing resources. The resulting strategy was a blend of development and operations tasks - unknowingly, the company was adopting practices that became the foundations of what is now known as DevOps.
Originally, Alex's expertise was solely in development, immersed in a world of code and software frameworks. However, company-wide layoffs due to the crisis thrust him into a new, unfamiliar role - that of an operations engineer. Rather than buckling under the pressure, Alex chose to embrace this new challenge, diving into various operational tasks ranging from mitigating security attacks to troubleshooting hardware failures.
In navigating these new waters, Alex found himself living the core principles of DevOps. Shared responsibilities became the norm, and he began taking ownership of the end-to-end process, effectively breaking down the once-rigid boundaries separating the roles of developers and operations engineers. This evolution highlighted the critical shift in mindset DevOps represents - an embodiment of collaboration and synergy between these traditionally separated roles, working together as integral parts of a unified whole.
Looking back, Alex's unexpected transition into operations proved to be an enriching journey. It wasn't just about acquiring new skills or expanding his role within the company. It was a revelation of how the seamless integration of development and operations could lead to remarkable efficiency and better end products - a true testament to the essence of DevOps.
Understanding the Needs for DevOps
In many organizations, a common challenge arises when the developer and operations teams function as distinct entities, operating in isolation. The developers focus on building and deploying applications, while the operations team struggles to troubleshoot and resolve issues that emerge in the production environment. Often, the operations team is limited to sharing logs with the developers, lacking the necessary expertise to address the problem themselves. Compounding the issue, developers are frequently denied access to the production environment, leaving them unaware of the root cause of the problem or the specific application version in use. This disconnect between teams can result in confusion and hinder the efficient resolution of issues, ultimately impeding organizational agility.
To bridge this gap and promote a more collaborative and cohesive approach, the adoption of DevOps culture becomes crucial. DevOps is a mindset and set of practices that unite the development and operations teams within an organization. By embracing DevOps, organizations can foster improved communication, shared knowledge, and streamlined processes between developers and operations professionals. This cultural shift enables both teams to work seamlessly together, enhancing efficiency and productivity throughout the software development lifecycle.
The benefits of adopting a DevOps culture are manifold. For developers, it means gaining valuable insights into the production environment, enabling them to identify and resolve issues promptly. Additionally, it allows for greater transparency regarding the application version deployed, facilitating more effective troubleshooting. On the other hand, operations teams benefit from the expertise and collaboration of developers, ensuring quicker and more accurate problem resolution. Overall, DevOps culture promotes synergy and alignment between development and operations, resulting in enhanced agility, improved software quality, and a better experience for end-users.
The core objective of DevOps lies in dismantling the traditional silos that separate development and operations, fundamentally transforming them from disparate teams with minimal interaction into a cohesive and interconnected unit. By proactively breaking down these silos, developers work in close collaboration with operations engineers, sharing responsibilities encompassing application development, deployment, and continuous monitoring. This holistic approach ensures seamless and comprehensive ownership of the end-to-end process, culminating in optimized outcomes and elevated operational efficiency.
The Benefits of DevOps
DevOps brings several key benefits to organizations. Primarily, it enhances efficiency through improved communication and understanding of the software development process between development and operations teams. This collaboration reduces confusion, improves agility and elevates software quality. Furthermore, it enables a quick response to customer feedback, enhancing user experience and ensuring delivery of the best product or service.
From a broader perspective, DevOps accelerates software delivery, facilitates frequent releases with fewer defects, and ensures a faster time-to-market. It cultivates a culture of continuous learning and improvement, enabling teams to adapt swiftly to changing business requirements and customer needs. DevOps also mitigates risk and improves security by integrating security measures throughout the software development lifecycle, leading to early identification and resolution of vulnerabilities. Lastly, it promotes collaboration and teamwork among all stakeholders, aligning them towards a common goal and facilitating collaborative problem-solving.