CI/CD and Its Importance

We all know what CI/CD is and how it fosters a sense of collaboration among teams and enables them to deliver high-quality software efficiently and reliably. By automating the integration, testing, and deployment processes, CI/CD helps maintain code quality, reduce manual effort, and provide continuous feedback, ultimately leading to faster and more reliable software delivery. CI/CD is important for the following reasons:

Enhanced Code Quality: CI/CD allows for frequent testing and integration, catching issues early in the development cycle. This helps maintain higher code quality and reduces the likelihood of bugs reaching production.
Faster Time to Market: CI/CD streamlines testing and deployment, ensuring swift and reliable feature delivery.
Reduced Manual Effort: Automation in CI/CD reduces the need for manual intervention and the scope for human error, enabling developers to concentrate on more important tasks.
Improved Collaboration: With CI/CD, team members can work on different features simultaneously and merge their code changes frequently. This encourages better collaboration and communication within the team.
Consistent Environments: CI/CD pipelines can include automated processes to create consistent and reproducible development, testing, and production environments. This ensures that the code runs as expected across different stages.
Continuous Feedback: CI/CD provides continuous feedback to developers through automated testing and monitoring, helping them understand the impact of their changes quickly and make the necessary adjustments.
Increased Reliability and Stability: CI/CD reduces the risk associated with each deployment by shipping smaller, incremental updates rather than large, monolithic releases.

To take full advantage of all of these benefits, it is very important that the CI/CD pipeline itself is optimized. We will discuss the important aspects of optimizing a CI/CD pipeline using Cloud Build.

Time To Get into Cloud Build

We cannot discuss CI/CD on Google Cloud without mentioning Cloud Build. Cloud Build supports various environments and integrates with various source code repositories, allowing for seamless CI/CD pipelines.

Key Concepts

Let's talk about the key concepts within Cloud Build that make it very effective.

Triggers

Triggers automate the execution of builds based on specified conditions. They help streamline the CI/CD process by automatically initiating builds when certain events occur or at specified times. Builds can be triggered manually via the Cloud Build UI, CLI, or API without relying on external events; through a webhook that initiates a build in response to events from external systems, such as changes in a source code repository or notifications from other services; or on a schedule at a specified time, similar to cron jobs. Cloud Build triggers let you select the event that kicks off the pipeline, a.k.a. the build. Some of the most commonly used trigger event types are:

GitHub integrated:
- On a push to a branch
- On a pull request
- On a new tag/release creation

Manual invocations/other events:
- Manual runs
- On a Pub/Sub message (based on a trigger event from other systems)
- Webhook event (trigger via API calls)

Build Steps

Build steps are individual actions, executed sequentially by default, that make up the build process, such as compiling code, running tests, and deploying applications.
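Since build configs are written in YAML (as described below), here is a minimal, hypothetical cloudbuild.yaml sketch of what a set of build steps can look like; the step ids, image names, repository path, and machine type are illustrative assumptions rather than values taken from this article.

YAML

# Hypothetical cloudbuild.yaml: tests and the image build run in parallel, then the image is pushed.
steps:
  - id: run-tests
    name: 'python:3.11-slim'              # a prebuilt community image used as the build step
    entrypoint: bash
    args: ['-c', 'pip install -r requirements.txt && pytest']
  - id: build-image
    name: 'gcr.io/cloud-builders/docker'  # a prebuilt Google Cloud builder
    args: ['build', '-t', 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA', '.']
    waitFor: ['-']                        # '-' means start immediately, in parallel with run-tests
  - id: push-image
    name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA']
    waitFor: ['build-image']              # starts as soon as build-image finishes
images:
  - 'us-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA'
options:
  machineType: 'E2_HIGHCPU_8'             # one way to allocate more CPU to the build

The waitFor fields show how independent steps can run in parallel, and the options block shows one way to allocate more build resources; both ideas come up again in the optimization section below. $PROJECT_ID and $SHORT_SHA are default substitutions provided by Cloud Build.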
Repository Objects

Repository objects encompass the source code and configuration files stored in a version control system (e.g., GitHub, GitLab, Cloud Source Repositories) and used in the build process (see the Cloud Build repositories documentation for more info).

Connections

Connections in Cloud Build refer to the integrations between Cloud Build and external version control systems or other services. These connections allow Cloud Build to access the source code and trigger builds based on repository events.

GitHub Apps

GitHub Apps are applications that can be integrated with GitHub repositories to provide additional functionality. In the context of Cloud Build, GitHub Apps can be used to trigger builds and report build statuses directly within GitHub.

Images

Prebuilt images: These are standard Docker images provided by Google Cloud or the community that can be used as build steps without additional configuration.
Custom images: These are Docker images the user creates to carry out specific tasks as part of the build process. Custom images can include all necessary dependencies and configurations for specialized build steps. See the Cloud builders documentation for more.

Build Config Files

Build config files define the build steps and their execution order. They are typically written in YAML or JSON format. Read more in "Create a build configuration file."

Artifacts and Storage

Artifacts: These are files produced by the build process, such as compiled binaries, Docker images, or test results. Artifacts can be stored and retrieved for further use or deployment.
Storage: Cloud Build can store artifacts in Google Cloud Storage (GCS) or Google Container Registry (GCR). GCS is used to store general files, while GCR is specifically used for Docker images.

Optimization Techniques for Cloud Build CI/CD

Even though Cloud Build offers many key concepts and greatly simplifies CI/CD, we still need a few optimization techniques to achieve excellence in this area. Let us categorize the optimization techniques as follows.

Speed and Efficiency

We will explore the elements that enhance the speed and efficiency of CI/CD pipelines.

Caching

Utilize caching to store and reuse previously built artifacts or dependencies, reducing build times.
Docker layer caching: Cache Docker image layers to avoid rebuilding unchanged layers.
Dependency caching: Cache dependencies to speed up subsequent builds.
With caching in place, a rebuild only touches the components that have changed since the previous build, which makes the process noticeably more efficient.

Parallel Steps

Execute build steps in parallel whenever possible to reduce overall build time (the waitFor field sketched earlier controls this).

Docker Image Optimization

Unwanted installs: Remove unnecessary packages and files from Docker images to reduce size and build time.
Dependency management: Use multi-stage builds to keep final images lightweight by including only the necessary dependencies.

Resource Allocation

We must allocate appropriate resources (CPU, memory) to ensure optimal performance of build steps. We can do so by specifying resource options, such as the machine type, in the build config.

Reliability

Reliability and maintainability are other important aspects of CI/CD that, if worked on diligently, can add significant value.

Build Stages

Break larger builds into smaller, manageable stages by using multiple build steps and conditional execution to split tasks.

Error Handling

Implement conditionals to handle different scenarios within the build process.
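To make the conditional error-handling idea concrete, here is a small, hypothetical sketch of a single build step; the scanner command and the step id are assumptions, and the point is simply that the step's exit code decides whether the build continues.

YAML

# Hypothetical step: run an optional scan, but fail gracefully when the tool is unavailable
- id: optional-scan
  name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: bash
  args:
    - '-c'
    - |
      if ! command -v scanner >/dev/null; then
        echo "scanner not installed; skipping optional scan"
        exit 0                       # exit code 0: the build carries on
      fi
      scanner --strict || exit 1     # a non-zero exit code marks this step (and the build) as failed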
Monitor exit codes to determine the success or failure of build steps. Ensure that builds fail gracefully and notify the relevant stakeholders.

Security

Ensuring security in CI/CD is critical for protecting sensitive information and maintaining application integrity.

Secrets Manager Injection

Securely manage and inject sensitive information (e.g., API keys, passwords) into the CI/CD pipeline using tools like Google Cloud Secret Manager. This measure safeguards sensitive data from unauthorized access and significantly reduces the risk of leaks. In a typical setup, the containers have no access to any secret values until the deployment stage; they only reference an environment variable under the assumption that it will be available at runtime. Using the "--update-secrets" flag at deploy time ensures that secret values tagged as version 1 of the "openai_api_key" and "openai_org_id" Secret Manager entries are assigned to their corresponding environment variables. This approach mitigates the risk of inadvertent secret exposure.

Image Vulnerability Scans

Scan Docker images for vulnerabilities before deployment to identify and mitigate security issues early, preventing compromised software from reaching production. This is a built-in feature of Artifact Registry.

Integrations in Cloud Build

Another important aspect of a CI/CD tool is how well it integrates with other tools and processes to enhance various aspects of release management.

Infrastructure as Code: Terraform

Integrating Terraform with Cloud Build enables automated and consistent infrastructure deployment alongside your application code. It also ensures reproducible and consistent infrastructure setups, simplifies infrastructure management, and allows for version-controlled infrastructure code.

Compliance (SonarQube, FOSSA, Checkmarx)

Another important aspect of optimizing CI/CD is integrating compliance tools with Cloud Build:
SonarQube: Static code analysis for identifying code quality issues
FOSSA: License compliance and vulnerability scanning
Checkmarx: Static Application Security Testing (SAST) for identifying security vulnerabilities
Integrating these tools goes a long way toward improving code quality, security, and licensing compliance.

Substitutions (User Subs, Dynamic Subs, Secret Manager Subs, Trigger-Based Subs)

Cloud Build offers a wide range of substitution options that allow users to make substitutions during various stages of their builds, depending on their DevOps practices. Here are a few:
User substitutions: User-defined key-value pairs under substitutions, which can be reused at any build stage
Default substitutions: By default, Cloud Build offers a wide range of substitution values, from the project ID, region, and location to the trigger name, commit SHA, and so on. See the full list in the Cloud Build documentation.
Learn more about substitutions in the Cloud Build documentation.

Conclusion

In conclusion, optimizing and securing your Cloud Build pipeline is crucial for delivering high-quality software quickly and reliably. By leveraging techniques such as caching, parallel steps, and Terraform for IaC, and by integrating security measures like secret management and vulnerability scans, you can build a robust and efficient CI/CD process. These strategies enhance speed and efficiency and ensure that your deployments are secure, compliant, and resilient, positioning your development team for sustained success. Learn more about the various Cloud Build features in the official documentation.
You might have used large language models like GPT-3.5, GPT-4o, or any of the other models such as Mistral or Perplexity, and these large language models are awe-inspiring in what they can do and how strong a grasp of language they have. So, today I was chatting with an LLM, and I wanted to know about my company's policy if I work from India instead of the UK. I got a really generic answer, and then it asked me to consult my company directly. The second question I asked was, "Who won the last T20 World Cup?", and we all know that India won the ICC T20 2024 World Cup.

They're large language models: they're very good at next-word prediction; they've been trained on public knowledge up to a certain cutoff; and beyond that they're going to give us outdated information. So, how can we incorporate domain knowledge into an LLM so that we can get it to answer those questions? There are three main ways people go about incorporating domain knowledge:

Prompt engineering: With in-context learning, we can steer an LLM toward a solution by putting a lot of effort into the prompt; however, it will never be able to answer about information it has never seen.
Fine-tuning: Learning new skills; in this case, you start with the base model and train it on the data or skill you want it to acquire. It can be really expensive to train a model on your data.
Retrieval augmentation: Learning new facts temporarily to answer questions.

How Do RAGs Work?

When I want to ask about any policy in my company, I store the policy documents in a database and then ask a question about them. Our search system searches the documents, finds the most relevant results, and returns the information. We call this information "knowledge." We pass the knowledge and the query to an LLM, and we get the desired result. (A minimal end-to-end sketch of this retrieve-then-generate loop appears near the end of this post.)

We understand that if we provide the LLM with domain knowledge, it will be able to answer perfectly. Now everything boils down to the retrieval part: responses are only as good as the retrieved data. So, let's understand how we can improve document retrieval.

How Do We Search?

Traditional search has been keyword based, but keyword search suffers from the vocabulary gap. If I say I'm looking for underwater activities but the word "underwater" is nowhere in our knowledge base, then a keyword search will never match scuba and snorkeling. That's why we also want vector-based retrieval, which can find things by semantic similarity. A vector-based search will recognize that scuba diving and snorkeling are semantically similar to "underwater" and be able to return those results. That's why we're talking about the importance of vector embeddings today. So, let's go deep into vectors.

Vector Embeddings

A vector embedding takes some input, like a word or a sentence, and sends it through an embedding model. You get back a list of floating-point numbers, and the number of dimensions varies based on the model you're using. The most common models look roughly like this:

word2vec: accepts a single word at a time and produces vectors of length 300.
OpenAI text-embedding-ada-002: accepts up to 8,191 tokens of text and produces vectors of length 1,536.

What we've seen in the last few years are models based on LLMs, which can take much larger inputs; that's really helpful, because then we can search on more than just single words. The one many people use now is OpenAI's ada-002.
You need to be consistent about which model you use: make sure you are using the same model for indexing the data and for searching. You can learn more about the basics of vector search in my previous blog.

Python

import json
import os

import azure.identity
import dotenv
import numpy as np
import openai
import pandas as pd

# Set up OpenAI client based on environment variables
dotenv.load_dotenv()
AZURE_OPENAI_SERVICE = os.getenv("AZURE_OPENAI_SERVICE")
AZURE_OPENAI_ADA_DEPLOYMENT = os.getenv("AZURE_OPENAI_ADA_DEPLOYMENT")

azure_credential = azure.identity.DefaultAzureCredential()
token_provider = azure.identity.get_bearer_token_provider(azure_credential,
    "https://cognitiveservices.azure.com/.default")
openai_client = openai.AzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint=f"https://{AZURE_OPENAI_SERVICE}.openai.azure.com",
    azure_ad_token_provider=token_provider)

In the above code, we first set up a connection to OpenAI. I'm using Azure.

Python

def get_embedding(text):
    get_embeddings_response = openai_client.embeddings.create(model=AZURE_OPENAI_ADA_DEPLOYMENT, input=text)
    return get_embeddings_response.data[0].embedding

def get_embeddings(sentences):
    embeddings_response = openai_client.embeddings.create(model=AZURE_OPENAI_ADA_DEPLOYMENT, input=sentences)
    return [embedding_object.embedding for embedding_object in embeddings_response.data]

These functions are just wrappers for creating embeddings using the ada-002 model:

Python

# optimal size to embed is ~512 tokens
vector = get_embedding("A dog just walked past my house and yipped yipped like a Martian")  # 8192 tokens limit

When we vectorize the sentence "A dog just walked past my house and yipped yipped like a Martian", we can write a long sentence and still calculate its embedding. No matter how long the sentence is, we get an embedding of the same length, which is 1,536. When we're indexing documents for RAG chat apps, we're often calculating embeddings for entire paragraphs; up to about 512 tokens per chunk is best practice. You don't want to calculate the embedding for an entire book, not only because that's above the limit of 8,192 tokens, but also because if you embed very long text, the nuance is lost when you try to compare one vector to another.

Vector Similarity

We compute embeddings so that we can calculate the similarity between inputs. The most common distance measurement is cosine similarity. We can use other methods to calculate the distance between vectors as well; however, cosine similarity is recommended when using the ada-002 embedding model. Below is the formula for the cosine similarity of two vectors.

Python

# Pseudocode: dot product divided by the product of the magnitudes
def cosine_sim(a, b):
    return dot(a, b) / (mag(a) * mag(b))

How do you calculate cosine similarity? It's the dot product over the product of the magnitudes. It tells us how similar two vectors are: what is the angle between these two vectors in multi-dimensional space? Here we visualize in two-dimensional space because we cannot visualize 1,536 dimensions. If the vectors are close, there's a very small theta; the angle theta is near zero, which means the cosine of the angle is near 1.
As the vectors get farther apart, the cosine goes down toward zero and potentially even to -1:

Python

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

sentences1 = ['The new movie is awesome',
              'The new movie is awesome',
              'The new movie is awesome']
sentences2 = ['djkshsjdkhfsjdfkhsd',
              'This recent movie is so good',
              'The new movie is awesome']

embeddings1 = get_embeddings(sentences1)
embeddings2 = get_embeddings(sentences2)

for i in range(len(sentences1)):
    print(f"{sentences1[i]} \t\t {sentences2[i]} \t\t Score: {cosine_similarity(embeddings1[i], embeddings2[i]):.4f}")

Here I've got a function to calculate cosine similarity, and I'm using NumPy to do the math for me since that will be nice and efficient. I have three identical sentences in the first list and three different sentences in the second list. I get the embeddings for each set of sentences and compare them pairwise. When the two sentences are the same, we see a cosine similarity of 1, as expected; when a sentence is very similar, we see a cosine similarity of 0.91 (pair 2), and the gibberish pair 1 scores 0.74. Looking at a value like 0.74, it's hard to know whether it means "pretty similar" or "pretty dissimilar." With the ada-002 model there is generally a very tight range of scores, roughly between 0.65 and 1 (speaking from my experience and what I have seen so far), so 0.74 is actually dissimilar.

Vector Search

The next step is vector search, because everything we did above measured similarity within an existing data set. What we want is to search for user queries: we compute the embedding vector for the query using the same model we used for the knowledge-base embeddings, then look in our vector database and find the K closest vectors to that query vector.

Python

# Load in vectors for movie titles
with open('openai_movies.json') as json_file:
    movie_vectors = json.load(json_file)

# Compute vector for query
query = "My Neighbor Totoro"
embeddings_response = openai_client.embeddings.create(model=AZURE_OPENAI_ADA_DEPLOYMENT, input=[query])
vector = embeddings_response.data[0].embedding

# Compute cosine similarity between query and each movie title
scores = []
for movie in movie_vectors:
    scores.append((movie, cosine_similarity(vector, movie_vectors[movie])))

# Display the top 10 results
df = pd.DataFrame(scores, columns=['Movie', 'Score'])
df = df.sort_values('Score', ascending=False)
df.head(10)

My query is "My Neighbor Totoro", because the indexed movies were only Disney movies and, as far as I know, "My Neighbor Totoro" is not a Disney movie. We do an exhaustive search here: for every single movie in those vectors, we calculate the cosine similarity between the query vector and that movie's vector, then put the results in a data frame and sort it so we can see the most similar ones.

Vector Database

We have learned how to do a vector search. So, moving on: how do we store our vectors? We usually store them in a vector database, or in a database that has a vector extension. We need something that can store vectors and, ideally, knows how to index them.
Below is a little example of Postgres code using the pgvector extension:

SQL

CREATE EXTENSION vector;

CREATE TABLE items (id bigserial PRIMARY KEY, embedding vector(1536));

INSERT INTO items (embedding) VALUES ('[0.0014701404143124819, 0.0034404152538627386, -0.01280598994344729,...]');

CREATE INDEX ON items USING hnsw (embedding vector_cosine_ops);

SELECT * FROM items ORDER BY embedding <=> '[-0.01266181, -0.0279284,...]' LIMIT 5;

Here we declare our vector column and say it's going to be a vector with 1,536 dimensions. Then we can insert vectors and run a SELECT to see which stored embeddings are closest to the embedding we're interested in. The index here uses HNSW, which is an approximation algorithm.

On Azure, we have several options for vector databases. There is vector support in Azure Cosmos DB for MongoDB vCore and also in Azure Cosmos DB for PostgreSQL. That is a way to keep your data where it already is: for example, if you're building a RAG chat application on your product inventory, your product inventory changes all the time, and it's already in Cosmos DB, then it makes sense to take advantage of the vector capabilities there. Otherwise, we have Azure AI Search, a dedicated search technology that does not just do vector search but also keyword search. It has a lot more features, it can index things from many sources, and it is what I generally recommend for really good search quality. I'm going to use Azure AI Search for the rest of this blog, and we're going to talk about its features, how it integrates, and what makes it a really good retrieval system.

Azure AI Search

Azure AI Search is a search-as-a-service offering in the cloud, providing a rich search experience that is easy to integrate into custom applications and easy to maintain, because all the infrastructure and administration is handled for you. AI Search has vector search, which you can use via the Python SDK (as I do below), but also with Semantic Kernel, LangChain, LlamaIndex, or most of the other packages you might be using; most of them support AI Search as the RAG knowledge base. To use AI Search, first we import the libraries.

Python

import os

import azure.identity
import dotenv
import openai
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    HnswAlgorithmConfiguration,
    HnswParameters,
    SearchField,
    SearchFieldDataType,
    SearchIndex,
    SimpleField,
    VectorSearch,
    VectorSearchAlgorithmKind,
    VectorSearchProfile,
)
from azure.search.documents.models import VectorizedQuery

dotenv.load_dotenv()

Initialize the Azure search variables:

Python

# Initialize Azure search variables
AZURE_SEARCH_SERVICE = os.getenv("AZURE_SEARCH_SERVICE")
AZURE_SEARCH_ENDPOINT = f"https://{AZURE_SEARCH_SERVICE}.search.windows.net"

Set up the OpenAI client based on environment variables:

Python

# Set up OpenAI client based on environment variables
dotenv.load_dotenv()
AZURE_OPENAI_SERVICE = os.getenv("AZURE_OPENAI_SERVICE")
AZURE_OPENAI_ADA_DEPLOYMENT = os.getenv("AZURE_OPENAI_ADA_DEPLOYMENT")

azure_credential = azure.identity.DefaultAzureCredential()
token_provider = azure.identity.get_bearer_token_provider(azure_credential,
    "https://cognitiveservices.azure.com/.default")
openai_client = openai.AzureOpenAI(
    api_version="2023-07-01-preview",
    azure_endpoint=f"https://{AZURE_OPENAI_SERVICE}.openai.azure.com",
    azure_ad_token_provider=token_provider)

Define a function to get the embeddings.
Python

def get_embedding(text):
    get_embeddings_response = openai_client.embeddings.create(model=AZURE_OPENAI_ADA_DEPLOYMENT, input=text)
    return get_embeddings_response.data[0].embedding

Creating a Vector Index

Now we can create an index; we will name it "index-v1". It has a couple of fields:

ID field: Like our primary key
Embedding field: A vector field, for which we specify how many dimensions it will have, plus a profile, "embedding_profile"

Python

AZURE_SEARCH_TINY_INDEX = "index-v1"

index = SearchIndex(
    name=AZURE_SEARCH_TINY_INDEX,
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchField(name="embedding",
                    type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
                    searchable=True,
                    vector_search_dimensions=3,
                    vector_search_profile_name="embedding_profile")
    ],
    vector_search=VectorSearch(
        algorithms=[HnswAlgorithmConfiguration(  # Hierarchical Navigable Small World
            name="hnsw_config",
            kind=VectorSearchAlgorithmKind.HNSW,
            parameters=HnswParameters(metric="cosine"),
        )],
        profiles=[VectorSearchProfile(name="embedding_profile", algorithm_configuration_name="hnsw_config")]
    )
)

index_client = SearchIndexClient(endpoint=AZURE_SEARCH_ENDPOINT, credential=azure_credential)
index_client.create_index(index)

In VectorSearch() we describe which algorithm or indexing strategy we want to use, and we're going to use HNSW, which stands for Hierarchical Navigable Small World. There are a couple of other options, like IVF and exhaustive KNN. AI Search supports HNSW because it works well and can be run efficiently at scale. So we say it's HNSW, and we tell it which metric to use for the similarity calculations. We can also customize other HNSW parameters if we're familiar with them.

Uploading Documents

Once the index is created, we upload the documents:

Python

search_client = SearchClient(AZURE_SEARCH_ENDPOINT, AZURE_SEARCH_TINY_INDEX, credential=azure_credential)
search_client.upload_documents(documents=[
    {"id": "1", "embedding": [1, 2, 3]},
    {"id": "2", "embedding": [1, 1, 3]},
    {"id": "3", "embedding": [4, 5, 6]}])

Search Using Vector Similarity

Now we search through the documents. We're not doing any sort of text search; we're only doing a vector query.

Python

r = search_client.search(search_text=None, vector_queries=[
    VectorizedQuery(vector=[-2, -1, -1], k_nearest_neighbors=3, fields="embedding")])
for doc in r:
    print(f"id: {doc['id']}, score: {doc['@search.score']}")

We're asking for the 3 nearest neighbors, and we're telling it to search the "embedding" field, because you could have multiple vector fields. We run this search and can see the output scores. The score in this case is not necessarily the cosine similarity, because the score can take other things into account as well; there is documentation about what the score means in different situations. The scores here are fairly low because the query vector [-2, -1, -1] is far from all of the stored embeddings. I usually don't look at the absolute scores myself; you can, but I typically look at the relative scores.
Searching on a Large Index

Python

AZURE_SEARCH_FULL_INDEX = "large-index"
search_client = SearchClient(AZURE_SEARCH_ENDPOINT, AZURE_SEARCH_FULL_INDEX, credential=azure_credential)

search_query = "learning about underwater activities"
search_vector = get_embedding(search_query)
r = search_client.search(search_text=None, top=5, vector_queries=[
    VectorizedQuery(vector=search_vector, k_nearest_neighbors=5, fields="embedding")])
for doc in r:
    content = doc["content"].replace("\n", " ")[:150]
    print(f"Score: {doc['@search.score']:.5f}\tContent:{content}")

Vector Search Strategies

During vector query execution, the search engine searches for similar vectors to determine which candidates to return in the search results. Depending on how you indexed the vector information, the search for suitable matches can be exhaustive or limited to near neighbors to speed up processing. Once candidates have been identified, similarity criteria are used to rank each result based on the strength of the match. There are two well-known vector search algorithms in Azure AI Search:

Exhaustive KNN: Runs a brute-force search across the whole vector space
HNSW: Runs an approximate nearest neighbor (ANN) search

Only vector fields labeled as searchable in the index, or listed in searchFields in the query, are used for searching and scoring.

When To Use Exhaustive KNN

Exhaustive KNN computes the distances between all pairs of data points and identifies the precise k nearest neighbors for a query point. It is designed for cases where strong recall matters most and users are willing to tolerate the trade-off in query latency. Because exhaustive KNN is computationally demanding, use it with small to medium datasets, or when precision requirements outweigh query-efficiency considerations.

Python

r = search_client.search(None, top=5, vector_queries=[
    VectorizedQuery(vector=search_vector, k_nearest_neighbors=5, fields="embedding")])

A secondary use case is to create a dataset for testing an approximate nearest neighbor algorithm's recall: exhaustive KNN can be used to generate a ground-truth collection of nearest neighbors.

When To Use HNSW

During indexing, HNSW generates additional data structures to facilitate faster search, arranging data points into a hierarchical graph structure. HNSW has various configuration options that can be tuned to meet the throughput, latency, and recall requirements of your search application. For example, at query time you can request an exhaustive search even if the vector field is HNSW-indexed.

Python

r = search_client.search(None, top=5, vector_queries=[
    VectorizedQuery(vector=search_vector, k_nearest_neighbors=5, fields="embedding", exhaustive=True)])

During query execution, HNSW provides quick neighbor queries by traversing the graph. This method strikes a balance between search precision and computing efficiency. HNSW is suggested for most scenarios because of its efficiency when searching massive datasets.

Filtered Vector Search

We have other capabilities when doing vector queries. You can set the vector filter mode on a vector query to specify whether you want to filter before or after query execution. Filters determine the scope of a vector query. They are set on, and iterate over, nonvector string and numeric fields attributed as filterable in the index, but the purpose of the filter determines what the vector query executes over: the entire searchable space, or the contents of a search result.
With a vector query, one thing you have to keep in mind is whether you should be doing a pre-filter or a post-filter. You generally want a pre-filter: the filter is applied first, and the vector search then runs over the filtered documents. With a post-filter, there is a chance that no relevant vector match survives the filter and you get back empty results. So, filter the documents first, then query the vectors.

Python

from azure.search.documents.models import VectorFilterMode  # needed for the filter mode below

r = search_client.search(
    None,
    top=5,
    vector_queries=[VectorizedQuery(
        vector=query_vector,
        k_nearest_neighbors=5,
        fields="embedding")],
    vector_filter_mode=VectorFilterMode.PRE_FILTER,
    filter="your filter here")

Multi-Vector Search

We also get support for multi-vector scenarios; for example, if you have an embedding for the title of a document that is different from the embedding for the body of the document, you can search these separately. We use this a lot with multimodal queries: if we have both an image embedding and a text embedding, we might want to search both of those embeddings. Azure AI Search supports not only text search but image and audio search as well. Let's see an example of an image search.

Python

import os

import dotenv
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    HnswAlgorithmConfiguration,
    HnswParameters,
    SearchField,
    SearchFieldDataType,
    SearchIndex,
    SimpleField,
    VectorSearch,
    VectorSearchAlgorithmKind,
    VectorSearchProfile,
)
from azure.search.documents.models import VectorizedQuery

dotenv.load_dotenv()

AZURE_SEARCH_SERVICE = os.getenv("AZURE_SEARCH_SERVICE")
AZURE_SEARCH_ENDPOINT = f"https://{AZURE_SEARCH_SERVICE}.search.windows.net"
AZURE_SEARCH_IMAGES_INDEX = "images-index4"
azure_credential = DefaultAzureCredential(exclude_shared_token_cache_credential=True)
search_client = SearchClient(AZURE_SEARCH_ENDPOINT, AZURE_SEARCH_IMAGES_INDEX, credential=azure_credential)

Creating a Search Index for Images

We create a search index for images. This one has an ID, a filename, and an embedding. This time the vector search dimensions are 1024, because that is the dimensionality of the embeddings that come from the Computer Vision model, so it's a slightly different length than ada-002. Everything else is the same.

Python

index = SearchIndex(
    name=AZURE_SEARCH_IMAGES_INDEX,
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SimpleField(name="filename", type=SearchFieldDataType.String),
        SearchField(name="embedding",
                    type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
                    searchable=True,
                    vector_search_dimensions=1024,
                    vector_search_profile_name="embedding_profile")
    ],
    vector_search=VectorSearch(
        algorithms=[HnswAlgorithmConfiguration(
            name="hnsw_config",
            kind=VectorSearchAlgorithmKind.HNSW,
            parameters=HnswParameters(metric="cosine"),
        )],
        profiles=[VectorSearchProfile(name="embedding_profile", algorithm_configuration_name="hnsw_config")]
    )
)

index_client = SearchIndexClient(endpoint=AZURE_SEARCH_ENDPOINT, credential=azure_credential)
index_client.create_index(index)

Configure the Azure Computer Vision Multimodal Embeddings API

Here we integrate with the Azure Computer Vision service to obtain embeddings for images and text.
It uses a bearer token for authentication, retrieves model parameters for the latest version, and defines functions to get the embeddings. The get_image_embedding function reads an image file, determines its MIME type, and sends a POST request to the Azure service, handling errors by printing the status code and response if the call fails. Similarly, the get_text_embedding function sends a text string to the service to retrieve its vector representation. Both functions return the resulting vector embeddings.

Python

import mimetypes
import os

import requests
from PIL import Image

token_provider = get_bearer_token_provider(azure_credential, "https://cognitiveservices.azure.com/.default")
AZURE_COMPUTERVISION_SERVICE = os.getenv("AZURE_COMPUTERVISION_SERVICE")
AZURE_COMPUTER_VISION_URL = f"https://{AZURE_COMPUTERVISION_SERVICE}.cognitiveservices.azure.com/computervision/retrieval"

def get_model_params():
    return {"api-version": "2023-02-01-preview", "modelVersion": "latest"}

def get_auth_headers():
    return {"Authorization": "Bearer " + token_provider()}

def get_image_embedding(image_file):
    mimetype = mimetypes.guess_type(image_file)[0]
    url = f"{AZURE_COMPUTER_VISION_URL}:vectorizeImage"
    headers = get_auth_headers()
    headers["Content-Type"] = mimetype
    # add error checking
    response = requests.post(url, headers=headers, params=get_model_params(), data=open(image_file, "rb"))
    if response.status_code != 200:
        print(image_file, response.status_code, response.json())
    return response.json()["vector"]

def get_text_embedding(text):
    url = f"{AZURE_COMPUTER_VISION_URL}:vectorizeText"
    return requests.post(url, headers=get_auth_headers(), params=get_model_params(), json={"text": text}).json()["vector"]

Add Image Vectors to the Search Index

Now we process each image file in the "product_images" directory. For each image, we call the get_image_embedding function to get the image's vector representation (embedding). Then we upload this embedding to the search client along with the image's filename and a unique identifier (derived from the filename without its extension). This allows the images to be indexed and searched based on their content.

Python

for image_file in os.listdir("product_images"):
    image_embedding = get_image_embedding(f"product_images/{image_file}")
    search_client.upload_documents(documents=[{
        "id": image_file.split(".")[0],
        "filename": image_file,
        "embedding": image_embedding}])

Query Using an Image

Python

query_image = "query_images/tealightsand_side.jpg"
Image.open(query_image)

Python

query_vector = get_image_embedding(query_image)
r = search_client.search(None, vector_queries=[
    VectorizedQuery(vector=query_vector, k_nearest_neighbors=3, fields="embedding")])
all = [doc["filename"] for doc in r]
for filename in all:
    print(filename)

We get the embedding for a query image and search for the top 3 most similar image embeddings using the search client, then print the filenames of the matching images.

Python

Image.open("product_images/" + all[0])

Now let's take it to the next level and search images using text.

Python

query_vector = get_text_embedding("lion king")
r = search_client.search(None, vector_queries=[
    VectorizedQuery(vector=query_vector, k_nearest_neighbors=3, fields="embedding")])
all = [doc["filename"] for doc in r]
for filename in all:
    print(filename)
Image.open("product_images/" + all[0])

Here we searched for "lion king." Not only did the search pick up the Lion King reference, it was also able to read the text on the images and bring back the best match from the dataset.
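Going back to the RAG flow described at the start of this post, here is a minimal retrieve-then-generate sketch. It is an illustration under assumptions, not code from the article: it reuses get_embedding, openai_client, and the large-index search_client (whose documents have a content field) from earlier, and it assumes a hypothetical AZURE_OPENAI_CHAT_DEPLOYMENT environment variable naming a chat-completion deployment.

Python

# Assumes search_client points at the knowledge-base index from "Searching on a Large Index",
# and AZURE_OPENAI_CHAT_DEPLOYMENT is a hypothetical chat-completion deployment name.
AZURE_OPENAI_CHAT_DEPLOYMENT = os.getenv("AZURE_OPENAI_CHAT_DEPLOYMENT")

user_question = "What is our work-from-abroad policy?"

# 1. Retrieve: vector search over the knowledge base, using the same embedding model used for indexing
question_vector = get_embedding(user_question)
results = search_client.search(search_text=None, top=3, vector_queries=[
    VectorizedQuery(vector=question_vector, k_nearest_neighbors=3, fields="embedding")])
knowledge = "\n\n".join(doc["content"] for doc in results)

# 2. Generate: pass the retrieved knowledge plus the question to the chat model
response = openai_client.chat.completions.create(
    model=AZURE_OPENAI_CHAT_DEPLOYMENT,
    messages=[
        {"role": "system", "content": "Answer using ONLY the provided sources. If the answer is not in the sources, say you don't know."},
        {"role": "user", "content": f"Sources:\n{knowledge}\n\nQuestion: {user_question}"},
    ],
)
print(response.choices[0].message.content)

The system prompt constrains the model to the retrieved sources, which is what lets the LLM answer the company-policy style of question it could not answer on its own.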
Conclusion I hope you enjoyed reading the blog and learned something new. In the upcoming blogs, I will be talking more about Azure AI Search. Let’s connect on LinkedIn or GitHub. Thank you for reading!
GenAI and Coding: A Short Story

When I first heard about Semantic Kernel, I was confused by the term. As a former C++ guy who has worked with operating systems, I thought it had something to do with them. Though I have followed the generative AI landscape, I felt most of it was the initial hype (Gartner's hype curve). Predicting tokens was lovely, but I never thought it would influence and create usable bots that could integrate with existing applications.

Then came LangChain, which immediately became one of the fastest-growing open-source projects of 2023. LangChain is an AI orchestrator that helps developers sprinkle the magic of AI into their new and existing applications. LangChain came in Python, Java, and JavaScript, and there was a void for C# folks. Semantic Kernel filled this gap: although it differs from LangChain in principle and style, it was essentially built to make the .NET folks happy. Icing on the cake? It also supports Java and Python.

Semantic Kernel

What Is Semantic Kernel?

In the evolving landscape of AI, integrating intelligent, context-aware functionalities into applications has become more crucial than ever. Microsoft's Semantic Kernel (SK) is a robust framework that allows developers to embed AI capabilities, such as natural language processing, into .NET applications. Whether you're looking to build chatbots, automate workflows, or enhance decision-making processes, Semantic Kernel provides a robust foundation.

Semantic Kernel is an extensible framework that leverages AI models like OpenAI's GPT to enable natural language understanding and generation within .NET applications. It provides a set of APIs and tools that allow developers to create AI-driven agents capable of processing text, generating content, and meaningfully interacting with users. At its heart, Semantic Kernel focuses on "plugins" — modular, reusable components that encapsulate specific capabilities such as understanding user intent, summarizing content, or generating responses. These can be combined and orchestrated to build sophisticated AI-driven applications.

Why Semantic Kernel?

LangChain is excellent, and Semantic Kernel is equally fabulous. Choosing one over the other should depend on your style, programming languages, or specific use cases. For example, if I needed to integrate GenAI capabilities into a browser-based solution such as a ReactJS/Angular/Vue or vanilla web application, I would use LangChain, as it supports JavaScript.

Are We Only Going to Talk About Semantic Kernel?

No: in this multi-part series, though the primary focus will be on Semantic Kernel, we will still explore use cases for LangChain and use it as a cousin to Semantic Kernel for specific scenarios. Enough talk! Let's build something with SK!

Prerequisites

Before diving into the code, ensure you have the following prerequisites:

.NET 7.0 SDK or later: Download it from the .NET website.
Visual Studio 2022: Ensure the ASP.NET Core workload is installed.
Azure AI model: It is possible to use OpenAI or other models directly, but for this series I will stick to AI models deployed in Azure, as I have enough Azure credits as an MS MVP. (P.S.: If you plan to use OpenAI's models, you'll need an API key, which you can obtain from the OpenAI website.)

Setting Up Your Project

The first step in integrating Semantic Kernel into your application is to set up the environment. Let's start with a simple console application and then walk through adding Semantic Kernel to the mix.
1. Create a New Console Project

It's fine if you prefer to create a new console application using Visual Studio or VS Code.

Shell

dotnet new console -n sk-console
cd sk-console

2. Add the Semantic Kernel NuGet Package

Add the Semantic Kernel NuGet package using the following command:

Shell

dotnet add package Microsoft.SemanticKernel

3. Set Up Semantic Kernel

Open your Program.cs file and configure the Semantic Kernel service. This service will handle interactions with the AI model. Take a look at the function AddAzureOpenAIChatCompletion(). As the name suggests, it wires OpenAI chat completion, using an OpenAI model hosted in Azure, into our Semantic Kernel builder. The parameter values come from my already-deployed gpt-4o model in Azure AI Studio. I will write a separate article on deploying AI models using Azure AI Studio and link it here later.

C#

var builder = Kernel.CreateBuilder();
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "<Your_Deployment_Name>",
    endpoint: "<Azure-Deployment-Endpoint-Ends-In:openai.azure.com>",
    apiKey: "<Your_API_Key>"
);
var kernel = builder.Build();

Think of this KernelBuilder as similar to the ASP.NET Core HostBuilder. Before the Build() call, you need to supply all of your plugin information (more on plugins later) so that SK is aware of it.

4. Ask the First Question

C#

Console.WriteLine(await kernel.InvokePromptAsync("What is Gen AI?"));

5. Running the Application

With everything configured, you're ready to run your application. Run the command below in the terminal.

Shell

dotnet run

6. We Did It!

All is well. Our Semantic Kernel configuration used the deployed Azure OpenAI model to answer our question. Hooray! I know, I know, this isn't much. But I still published the source code on GitHub. This is the starting point, and we will build from here.

Conclusion

Semantic Kernel is a powerful tool for bringing advanced AI capabilities to your .NET applications. Following the steps outlined in this multi-part series, you can quickly get started with Semantic Kernel and integrate intelligent, context-aware functionalities into your projects. The possibilities are vast, from simple chatbots to complex, AI-driven workflows. As we dive deeper, remember that the key to leveraging Semantic Kernel effectively is in how you define and orchestrate your skills. With a solid understanding of these basics, you're well on your way to building the next generation of intelligent applications.

What's Next?

Are we done? No. Now that we know how to add Semantic Kernel to a .NET application, it is time to take this flight off the ground. We will dig deeper and deeper as we go along with this multi-part series. In "Part 2: Understanding Plugins in Semantic Kernel, A Deep Dive," we will dive deeper into plugins in Semantic Kernel. We won't stop there: in the following parts, we will discuss agents, local SLMs and Ollama, Semantic Kernel in ASP.NET Core applications, mixing SK with AutoGen and LangChain, and more.
The language C# stands out as the top 5th programming language in a Stack Overflow survey. It is widely used for creating various applications, ranging from desktop to mobile to cloud native. With so many language keywords and features it will be taxing to developers to keep up to date with new feature releases. This article delves into the top 10 C# keywords every C# developer should know. 1. Async and Await Keywords: async, await The introduction of async and await keywords in C# make it easy to handle asynchronous programming in C#. They allow you to write code that performs operations without blocking the main thread. This capability is particularly useful for tasks that are I/O-bound or CPU-intensive. By making use of these keywords, programmers can easily handle long-running compute operations like invoking external APIs to get data or writing or reading from a network drive. This will help in developing responsive applications and can handle concurrent operations. Example C# public async Task<string> GetDataAsync() { using (HttpClient client = new HttpClient()) { string result = await client.GetStringAsync("http://bing.com"); return result; } } 2. LINQ Keywords: from, select, where, group, into, order by, join LINQ (Language Integrated Query) provides an easy way to query various data sources, such as databases, collections, and XML, directly within C# without interacting with additional frameworks like ADO.NET, etc. By using a syntax that is identical to SQL, LINQ enables developers to write queries in a readable way. Example C# var query = from student in students where student.Age > 18 orderby student.Name select student; 3. Properties Properties are mainly members that provide a flexible mechanism to read, write, or compute the value of a private field. Generally, we hide the internal private backing fields and expose them via a public property. This enables data to be accessed easily by the callers. In the below example, Name is the property that is hiding a backing field called name, marked as private to avoid outside callers modifying the field directly. Example C# class Person { private string name; // backing field public string Name // property { get { return name; } set { name = value; } } } class Program { static void Main(string[] args) { Person P1 = new Person(); P1.Name = "Sunny"; Console.WriteLine(P1.Name); } } 4. Generics Keywords: generic, <T> Generics allows you to write the code for a class without specifying the data type(s) that the class works on. It is a class that allows the user to define classes and methods with a placeholder. The introduction of Generics in C#2.0 has completely changed the landscape of creating modular reusable code which otherwise needs to be duplicated in multiple places. Imagine you are handling the addition of 2 numbers that are of int and then comes a requirement to add floats or double datatypes. We ended up creating the same duplicate code because we already defined a method with int datatypes in the method parameters. Generics makes it easy to define the placeholders and handle logic for different datatypes. Example C# public class Print { // Generic method which can take any datatype as method parameter public void Display<T>(T value) { Console.WriteLine($"The value is: {value}"); } } public class Program { public static void Main(string[] args) { Print print = new Print(); // Call the generic method with different data types print.Display<int>(10); print.Display<string>("Hello World"); print.Display<double>(20.5); } } 5. 
Delegates and Events Keywords: delegate, event A delegate is nothing but an object that refers to a method that you can invoke directly via delegate without calling the method directly. Delegates are equivalent to function pointers in C++. Delegates are type-safe pointers to any method. Delegates are mainly used in implementing the call-back methods and for handling events. Func<T> and Action<T> are inbuilt delegates provided out of the box in C#. Events, on the other hand, enable a class or object to notify other classes or objects when something of interest occurs. For example, think of a scenario where a user clicks a button on your website. It generates an event (in this case button click) to be handled by a corresponding event handler code. Examples Example code for declaring and instantiating a delegate: C# public delegate void MyDelegate1(string msg); // declare a delegate // This method will be pointed to by the delegate public static void PrintMessage(string message) { Console.WriteLine(message); } public static void Main(string[] args) { // Instantiate the delegate MyDelegate1 del = PrintMessage; // Call the method through the delegate del("Hello World"); } Example code for initiating an event and handling it via an event handler: C# // Declare a delegate public delegate void Notify(); public class ProcessBusinessLogic { public event Notify ProcessCompleted; // Declare an event public void StartProcess() { Console.WriteLine("Process Started!"); // Some actual work here.. OnProcessCompleted(); } // Method to call when the process is completed protected virtual void OnProcessCompleted() { ProcessCompleted?.Invoke(); } } public class Program { public static void Main(string[] args) { ProcessBusinessLogic bl = new ProcessBusinessLogic(); bl.ProcessCompleted += bl_ProcessCompleted; // Register event handler bl.StartProcess(); } // Event handler public static void bl_ProcessCompleted() { Console.WriteLine("Process Completed!"); } } 6. Lambda Expressions Keyword: lambda, => Lambda expressions provide an easy way to represent methods, particularly useful in LINQ queries and for defining short inline functions. This feature allows developers to write readable code by eliminating the need for traditional method definitions when performing simple operations. Lambda expressions enhance code clarity and efficiency by making them an invaluable tool for developers when working with C#. Example C# Func<int, int, int> add = (x, y) => x + y; int result = add(3, 4); // result is 7 7. Nullable Types Keyword: ? In C#, nullable types allow value types to have a null state, too. This comes in handy when you're working with databases or data sources that might have null values. Adding a ? after a value type helps developers handle cases where data could be missing or not defined. This prevents in causing potential errors when the code is running. This feature makes applications more reliable by giving a clear and straightforward way to handle optional or missing data. Example: C# int? num = null; if (num.HasValue) { Console.WriteLine($"Number: {num.Value}"); } else { Console.WriteLine("No value assigned."); } 8. Pattern Matching Keywords: switch, case Pattern matching is another useful feature introduced in C# 7.0 which then underwent a series of improvements in successive versions of the language. Pattern matching takes an expression and it helps in testing whether it matches a certain criteria or not. Instead of lengthy if-else statements, we can write code in a compact way that is easy to read. 
In the below example, I have used object where I assigned value 5 (which is of int datatype), which then uses pattern matching to print which datatype it is. Example C# object obj = 5; if (obj is int i) { Console.WriteLine($"Integer: {i}"); } switch (obj) { case int j: Console.WriteLine($"Integer: {j}"); break; case string s: Console.WriteLine($"String: {s}"); break; default: Console.WriteLine("Unknown type."); break; } 9. Extension Methods Keyword: this (in method signature) Extension methods allow developers to add new methods to existing types without changing their original code. These methods are static but work like instance methods of the extended type, offering a smooth way to add new functionality. Extension methods make code more modular and reusable giving developers the ability to extend types from outside libraries without messing up with the original code. Extension methods also support the "Open/Closed" principle, which means code is open to extension but closed to modifications. Example C# public static class StringExtensions { public static bool IsNullOrEmpty(this string value) { return string.IsNullOrEmpty(value); } } // Usage string str = null; bool result = str.IsNullOrEmpty(); // result is true 10. Tuples Keyword: tuple Tuples let you group multiple values into one single unit. They help when you want to send back more than one value from a method without using out parameters or making a new class only for the purpose of transferring data between objects. With tuples, you can package and return a set of related values, which makes our code easier to read and understand. You can give names to the fields in tuples or leave them unnamed. You then refer to the values using Item1 and Item2 as shown below. Example C# public (int, string) GetPerson() { return (1, "John Doe"); } // Usage var person = GetPerson(); Console.WriteLine($"ID: {person.Item1}, Name: {person.Item2}"); Conclusion By using async/await to handle tasks well, LINQ to get data, Properties to keep data safe, Generics to make sure the types are right, Delegates and Events for programs that react to events, Lambda expressions to write short functions, nullable types to deal with missing info, pattern matching to make code clearer and say more, extension methods to add new features, and tuples to organize data well, you can write code that's easier to manage and less likely to break. When you get good at using these features, you'll be able to build responsive, scalable, and top-notch applications. Happy Coding!!!
A zero-to-one project is also known as a greenfield project. These projects start as small ideas with almost no tangible work behind them. The inherent complexities of zero-to-one projects are hard to manage, and many teams struggle with them. The chances of failure are higher in a zero-to-one project, and the reasons can be very hard to detect. This article summarizes the main reasons why many such projects fail. Many of these projects are also called a Proof of Concept (POC) or a Minimum Viable Product (MVP). Of course, there are some variants of perspective here, but that's not the intent of this article.

Scope Creep

This is a by-product of a lack of clear vision, or of stakeholders expecting too much from an initial version. It is very similar to the tarball analogy, where each small increment leads to a big blob that becomes impossible to manage. Also, too much change in focus results in lost productivity and diminished returns. The MVP should have a clear problem statement that it solves, and that statement should not change very often. There are project management techniques for introducing changes to a project, but it should be of utmost importance to the stakeholders that the trade-offs here are higher.

Not Enough Market Research

This happens more frequently, and projects in this category are bound to fail from day one. Many times this is caused by stakeholders being tunnel-visioned and not considering other alternatives. Interviewing users is one strategy to reduce this risk. Even after doing market research and interviewing many users, the extracted information should be reviewed by more than one person. The problem with data is that you can always find some signals, but it takes experience to detect the noise in those signals. Also, if there are established companies already working on similar ideas, or on ideas that solve the same problem differently, that can introduce significant challenges and might be a reason to pivot.

Mismatch Between Resources

Every product manager gets excited to hire an engineering team and build out the product. Too many times, engineers are hired with whatever skill set they happen to have, while many challenging zero-to-one projects demand a carefully chosen technological solution. Hiring a good technology consultant to lay out the plans and highlight the skills to hire for can pay off in the future. Another problem is the investment required if your solution depends on cloud services. Many cloud tech companies offer low rates to let start-ups use their services; however, this lasts only for a certain period, and when full pricing kicks in, it can eat up the profits really fast. It is also very hard to pivot at that point and move the application to a different provider. Hiring good talent and engineering managers for such problems will pay off.

Regulations

A lot has changed in terms of how data is viewed by regulators, and not paying attention to regulations can result in unnecessary legal action. This should be studied well before starting the journey.
DORA (DevOps Research and Assessment) metrics, developed by the DORA team, have become a standard for measuring the efficiency and effectiveness of DevOps implementations. As organizations adopt DevOps practices to accelerate software delivery, tracking performance and reliability becomes critical. DORA metrics help organizations address these tasks by providing a framework for understanding how well teams deliver software and how quickly they recover from failures. This article will delve into DORA metrics, demonstrate how to track them using Jenkins, and explore how to use Prometheus for collecting and displaying these metrics in Observe.

What Are DORA Metrics?

DORA metrics are a set of four key performance indicators (KPIs) that help organizations evaluate their software delivery performance:

Deployment Frequency (DF): How often code is deployed to production
Lead Time for Changes (LT): The time taken from code commit to production deployment
Change Failure Rate (CFR): The percentage of changes that fail in production
Mean Time to Restore (MTTR): The average time it takes to recover from a failure in production

These metrics are valuable because they provide actionable insights into software development and deployment practices. High-performing teams tend to deploy more frequently and have shorter lead times, lower failure rates, and quicker recovery times, leading to more resilient and robust applications.

Tracking DORA Metrics in Jenkins

Jenkins is a widely used automation server that enables continuous integration and delivery (CI/CD). Below is an example of how to track DORA metrics using a Jenkins pipeline, using shell commands and scripts to log deployment frequency, calculate lead time for changes, monitor change failure rate, and determine the mean time to restore.

Groovy

pipeline {
    agent any

    environment {
        DEPLOY_LOG = 'deploy.log'
        FAIL_LOG = 'fail.log'
    }

    stages {
        // Build application
        stage('Build') {
            steps {
                echo 'Building the application...'
                // Run required build commands
                sh 'make build'
            }
        }

        // Test application
        stage('Test') {
            steps {
                echo 'Running tests...'
                // Run required test commands
                sh 'make test'
            }
        }

        // Deploy application
        stage('Deploy') {
            steps {
                echo 'Deploying the application...'
                // Run the deployment steps
                sh 'make deploy'
                // Log a deployment timestamp to compute deployment frequency
                // (single quotes so the shell, not Groovy, expands $(date) and ${DEPLOY_LOG})
                sh 'echo $(date +%F_%T) >> ${DEPLOY_LOG}'
            }
        }
    }

    post {
        always {
            script {
                // Computing Deployment Frequency (DF); default to 0 if the log does not exist yet
                def deploymentCount = sh(script: "wc -l < ${DEPLOY_LOG} 2>/dev/null || echo 0", returnStdout: true).trim()
                echo "# of Deployments: ${deploymentCount}"

                // Writing build failures into a log for computing CFR
                if (currentBuild.result == 'FAILURE') {
                    sh 'echo $(date +%F_%T) >> ${FAIL_LOG}'
                }

                // Computing Change Failure Rate (CFR); guard against division by zero
                def failureCount = sh(script: "wc -l < ${FAIL_LOG} 2>/dev/null || echo 0", returnStdout: true).trim()
                if (deploymentCount.toInteger() > 0) {
                    def CFR = (failureCount.toInteger() * 100) / deploymentCount.toInteger()
                    echo "Change Failure Rate: ${CFR}%"
                }

                // Computing Lead Time for Changes (LTC) using the last commit and deploy times
                def commitTime = sh(script: "git log -1 --pretty=format:'%ct'", returnStdout: true).trim()
                def currentTime = sh(script: "date +%s", returnStdout: true).trim()
                def leadTime = (currentTime.toLong() - commitTime.toLong()) / 3600
                echo "Lead Time for Changes: ${leadTime} hours"
            }
        }
        // End of pipeline
        success {
            echo 'Deployment successful!'
        }
        failure {
            echo 'Deployment failed!'
            // Failure handling
        }
    }
}

In the above script, each deployment is logged as a timestamp in the deploy log, which can be used to determine the deployment frequency over time. Similarly, failures are logged as timestamps in the fail log, and the two counts are used to compute the change failure rate. The difference between the last commit time and the current time gives the lead time for changes.

Monitoring DORA Metrics With Prometheus and Observe

Prometheus is an open-source monitoring and alerting toolkit commonly used for collecting metrics from applications. Combined with Observe, a modern observability platform, Prometheus can be used to visualize and monitor DORA metrics in real time.

Install Prometheus on a server: Download and install Prometheus from the official Prometheus downloads page.

Configure Prometheus: Set up the prometheus.yml configuration file to define the metrics to be collected and the scrape interval. Example configuration:

YAML

# Set the interval at which metrics are collected
global:
  scrape_interval: 30s

# Configure Prometheus to collect metrics from Jenkins on a specific port
scrape_configs:
  - job_name: 'jenkins'
    static_configs:
      - targets: ['<JENKINS_SERVER>:<PORT>']

Expose metrics in Jenkins: You can use either the Prometheus plugin for Jenkins or a custom script to expose metrics in a format Prometheus can scrape. Example Python script:

Python

from prometheus_client import start_http_server, Gauge
import random
import time

# Create Prometheus gauges for the four DORA KPIs
# (Prometheus metric names cannot contain spaces, so underscores are used)
DF = Gauge('deployment_frequency', 'Number of deployments in a day')
LT = Gauge('lead_time_for_changes', 'Average lead time for changes in hours')
CFR = Gauge('change_failure_rate', 'Percentage of changes that fail in production')
MTTR = Gauge('mean_time_to_restore', 'Mean time to restore service after a failure in minutes')

# Start the metrics HTTP server
start_http_server(8000)

# Send random values to generate sample metrics for testing
while True:
    DF.set(random.randint(1, 9))
    LT.set(random.uniform(1, 18))
    CFR.set(random.uniform(0, 27))
    MTTR.set(random.uniform(1, 45))
    # Sleep for 30s
    time.sleep(30)

Save this script on the server where Jenkins is running and run it to expose the metrics on port 8000.

Add Prometheus as a data source in Observe: Observe is a monitoring and observability tool that provides advanced features for monitoring, analyzing, and visualizing observability data. In Observe, you can add Prometheus as a data source by navigating to the integrations section and configuring Prometheus with the appropriate endpoint URL.

Set up dashboards in Observe: Create dashboards with widgets that graph these metrics, configure alerts on set thresholds, and analyze trends and patterns by drilling down into specific metrics.

Conclusion

DORA metrics are essential for assessing the performance and efficiency of DevOps practices. By implementing tracking in Jenkins pipelines and leveraging monitoring tools like Prometheus and Observe, organizations can gain deep insights into their software delivery processes. These metrics help teams continuously improve, making data-driven decisions that enhance deployment frequency, reduce lead time, minimize failures, and accelerate recovery. Adopting a robust observability strategy ensures that these metrics are visible to stakeholders, fostering a culture of transparency and continuous improvement in software development and delivery.
I started evaluating Google's Gemini Code Assist in December 2023, around the time of its launch. The aim of this article is to cover its usage and impact, beyond basic code generation, across all the activities a developer performs in their daily work (especially given the additional responsibilities entrusted to developers these days with the advent of shift-left practices and full-stack development roles).

Gemini Code Assist

Gemini Code Assist can be tried at no cost until November 2024. These are the core features it offered at the time of carrying out this exercise:

AI code assistance
Natural language chat
AI-powered smart actions
Enterprise security and privacy

Refer to the link for more details and pricing: Gemini Code Assist.

Note: Gemini Code Assist was formerly known as Duet AI. The study has been divided into two separate articles; interested readers should go through both in sequential order. The second part will be linked following its publication. This review expresses a personal view specific to Gemini Code Assist only. As intelligent code assistance is an evolving field, the review points are valid based on the features available at the time of carrying out this study.

Gemini Code Assist Capabilities: What's Covered in the Study, per Feature Availability

Capability areas studied: Gemini Pro, Code Customization, Code Transformations. Feature availability at the time of the study:

✓ Available for all users
✓ Local context from relevant files in the local folder
× Use natural language to modify existing code (e.g., Java 8 to Java 21)
✓ Chat
× Remote context from private codebases
✓ Improve code generations
✓ Smart Actions

Notes:
1. Items marked with × will be available in future releases of Gemini Code Assist.
2. Code Transformations is not released publicly and is in preview at the time of writing.

Technical Tools

Below are the technical tools used for the different focus areas during the exercise. The study was done with the tools, languages, and frameworks specified below, but the results can apply to other similar modern languages and frameworks with minor variations.

Language and Framework: Java 11 & 17; Spring Boot 2.2.3 & 3.2.5
Database: Postgres
Testing: JUnit, Mockito
IDE and Plugins: VS Code with extensions: Cloud Code, Gemini Code Assist
Cloud Platform: GCP with the Gemini API enabled on a project (prerequisite); Docker; Cloud SQL (Postgres); Cloud Run

Development Lifecycle Stages and Activities

For simplicity, the entire development lifecycle has been divided into the stages below, encompassing the different sets of activities that developers would normally do. For each lifecycle stage, some activities were selected and tried out in the VS Code editor using Gemini Code Assist.

1. Bootstrapping: Gain deeper domain understanding via the enterprise knowledge base (Confluence, Git repos, etc.); generate scaffolding code for microservices (controller, services, repository, models); pre-generated templates for unit and integration tests; database schema (table creation, relationships, scripts, test-data population)
2. Build and Augment: Implement business logic/domain rules; leverage implementation patterns (e.g., configuration management, circuit breaker, etc.);
exception and error handling; logging/monitoring; optimized code for performance (asynchronous, time-outs, concurrency, non-blocking, removing boilerplate)
3. Testing and Documentation: Debugging (using Postman to test API endpoints); unit/integration tests; OpenAPI specs creation; code coverage, quality, and code smells; test plan creation
4. Troubleshoot: Invalid/no responses or application errors
5. Deployment: Deploy services to the GCP stack: Cloud Run/GKE/App Engine, Cloud SQL
6. Operate: Get assistance modifying/upgrading existing application code and ensuring smooth operations

Requirements

Let's now consider a fictitious enterprise whose background and some functional requirements are given below. We will see to what extent Gemini Code Assist can help in fulfilling them.

Background: A fictitious enterprise that moved to the cloud or adopted "cloud-native" a few years back. Domain: e-commerce. We will keep the discussion centered on microservices using Spring Boot and Java. The enterprise is grappling with multi-fold technical challenges: green field (new microservices to be created), brown field (breaking a monolith into microservices, integration with legacy systems), and iterative development (incremental updates to microservices, upgrades, code optimization, patches).

Functional requirements: Allow listing products in the catalog; add, modify, and delete products in the catalog; a recommendation service; an asynchronous implementation to retrieve the latest price; query affiliated shops for a product and fetch the lowest price; bulk addition of products and grouping of processed results based on success and failure status. Rules: A product will belong to a single category, and a category may have many products.

Let's start with Stage 1, Bootstrapping, to gain deeper domain understanding.

1. Bootstrapping
During this phase, developers will need more understanding of the domain (e-commerce, in this case) from enterprise knowledge management (Confluence, Git, Jira, etc.), get more details about the specific services that will need to be created, and get a viewpoint on the choice of tech stack (Java and Spring Boot) with the steps to follow to develop the new services. Let's see how Gemini Code Assist can help in this regard and to what extent.

Prompt: "I want to create microservices for an e-commerce company. What are typical domains and services that need to be created for this business domain"

Note: The responses by Gemini Code Assist Chat are based on information retrieved from public online/web sources on which it is trained, not from the enterprise's own knowledge sources, such as Confluence. Though helpful, this is generic e-commerce information. In the future, when Gemini Code Assist provides information more contextual to the enterprise, it will be more effective.

Let's now generate some scaffolding code for the catalog and recommendation services suggested by Code Assist. First, we will build a Catalog Service through Gemini Code Assist. A total of 7 steps, along with code snippets, were generated, together with the relevant REST API endpoints to test the service once it is up. Let's begin with the first recommended step, "Create a new Spring Boot project."

Building Catalog Service, Step 1
Generate the project through Spring Initializr.

Note: Based on user prompts, Gemini Code Assist generates code and instructions to follow in textual form. Direct generation of files and artifacts is not supported yet. Generated code needs to be copied into files at the appropriate locations.
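To make the catalog's data model concrete before walking through the generated steps, here is a minimal, hypothetical JPA sketch of the Product/Category relationship described in the requirements (a product belongs to exactly one category; a category may have many products). The class and field names, and the use of jakarta.persistence (Spring Boot 3), are illustrative assumptions rather than the exact code Gemini Code Assist generated.

Java

// Illustrative sketch only; in a real project each entity would live in its own file.
import jakarta.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
class Category {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;

    // A category may have many products
    @OneToMany(mappedBy = "category", cascade = CascadeType.ALL)
    private List<Product> products = new ArrayList<>();

    // Getters and setters omitted (the article later adds Lombok for these)
}

@Entity
class Product {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String name;
    private double price;

    // Each product belongs to exactly one category
    @ManyToOne(optional = false)
    @JoinColumn(name = "category_id")
    private Category category;

    // Getters and setters omitted
}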
Building Catalog Service, Steps 2 and 3
Add the dependency for JPA, and define the Product and Category entities.

Building Catalog Service, Step 4
Create the repository interfaces.

Building Catalog Service, Step 5
Update the service layer.

Building Catalog Service, Steps 6 and 7
Update the controller and run the application.

Building Catalog Service, Additional Step: Postgres Database Specific
This step was not initially provided by Gemini Code Assist but came out of an extended conversation/prompt by the developer. Some idiosyncrasies had to be corrected before using the generated scripts; for example, a Postgres database name cannot contain hyphens.

Building Through Gemini Code Assist vs. Code Generators
A counterargument to using Gemini Code Assist is that a seasoned developer could quickly generate scaffolding code with JPA entities, based on past experience and familiarity with the existing codebase, using tools such as Spring Roo or JHipster. However, there may be a learning curve, configuration, or approvals required before such tools can be used in an enterprise setup. The ease of use of Gemini Code Assist and its flexibility across diverse use cases and domains make it a viable option even for a seasoned developer; it can, in fact, complement code-gen tools and be leveraged as the next step after initial scaffolding.

2. Build and Augment
Now let's move to the second stage, Build and Augment, and evolve the product catalog service further by adding, updating, and deleting products generated through prompts. Generate a method to save a product by specifying comments at the service layer. Along similar lines to the product catalog service, we created a recommendation service; each of its steps can be drilled down further, as we did during the product catalog service creation.

Now, let's add some business logic by adding a comment and using Gemini Code Assist Smart Actions to generate code. Code suggestions can be generated not only from comments; Gemini Code Assist is also intelligent enough to provide suggestions dynamically based on the developer's keyboard inputs and intent. Re-clicking Smart Actions can give multiple code options. Another interactive way to generate code is the Gemini Code Assist chat feature.

Let's now try to change existing business logic. Say we want to return a map of successful and failed product lists instead of a single list, so we can discern which products were processed successfully and which ones failed. Let's also try to improve the existing method with an async implementation using Gemini Code Assist.

Next, let's refactor existing code by applying a strategy pattern through Gemini Code Assist.

Note: The suggested code builds a PricingStrategy for shops, e.g., random pricing and product-length pricing. But this is still too much boilerplate code, so a developer, based on their experience, should probe further with prompts to reduce it. Let's try to reduce the boilerplate through Gemini Code Assist.

Note: Based on the input prompt, the suggestion is to modify the constructor of the Shop class to accept an additional function parameter for the PricingStrategy using lambdas, so that dynamic behavior can be passed in during the instantiation of Shop objects. A sketch of this idea follows below.
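As referenced in the note above, here is a minimal, hypothetical sketch of the lambda-based refactoring: the Shop constructor accepts a pricing strategy as a function parameter, and the random-pricing and product-length-pricing behaviors are passed in as lambdas at instantiation time. The interface, class, and method names are assumptions for illustration, not the code Gemini Code Assist generated.

Java

// Hypothetical functional interface representing a pricing strategy.
@FunctionalInterface
interface PricingStrategy {
    double priceFor(String productName, double basePrice);
}

class Shop {
    private final String name;
    private final PricingStrategy pricingStrategy;

    // The constructor accepts the strategy as a function parameter,
    // so dynamic behavior can be passed in during instantiation.
    Shop(String name, PricingStrategy pricingStrategy) {
        this.name = name;
        this.pricingStrategy = pricingStrategy;
    }

    double quote(String productName, double basePrice) {
        return pricingStrategy.priceFor(productName, basePrice);
    }
}

class PricingDemo {
    public static void main(String[] args) {
        // Random pricing: +/- 10% around the base price
        Shop shopA = new Shop("ShopA",
                (product, base) -> base * (0.9 + Math.random() * 0.2));

        // Product-length pricing: base price plus a small premium per character
        Shop shopB = new Shop("ShopB",
                (product, base) -> base + product.length() * 0.05);

        System.out.println(shopA.quote("laptop", 1000.0));
        System.out.println(shopB.quote("laptop", 1000.0));
    }
}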
3. Testing and Documentation
Now, let's move to stage 3, Testing and Documentation, and probe Gemini Code Assist on how to test the endpoint. As per the response, Postman, curl, unit tests, and integration tests are some of the testing options suggested by Gemini Code Assist. Next, let's have Gemini Code Assist generate the payload for testing the /bulk endpoint via Postman, and see how effective the generated payloads are by hitting the /bulk endpoint. Let's also see if we can fix things with Gemini Code Assist so that invalid category IDs are handled during product creation.

Next, let's generate OpenAPI specifications for our microservices using Gemini Code Assist.

Note: Documenting APIs so that consumers can easily call and integrate them into their applications is a common requirement in microservices projects. However, it is often a time-consuming activity for developers. Swagger/OpenAPI Specs is a common format for documenting REST APIs. Gemini Code Assist generated OpenAPI specs that matched expectations in this regard.

Next, we generate unit test cases at the controller layer. Following a similar approach, unit test cases can be generated at the other layers, i.e., service and repository, too. We then ran the generated unit test cases and checked whether we encountered any errors.

4. Troubleshooting
While running the application, we encountered an error caused by a mismatch between the table name and the entity name, which we were able to rectify with Gemini Code Assist's help. Next, we encountered empty results on the get-products call even though data existed in the products table. To overcome this issue, we included Lombok dependencies for the missing getters and setters.

Debugging: An Old Friend to the Developer's Rescue
The developer's debugging skill remains handy, as there will be situations where the results of generated code are not as expected, i.e., hallucinations. We noted that a developer needs to be aware of concepts such as marshalling, unmarshalling, and annotations such as @RequestBody to troubleshoot such issues and then get more relevant answers from Gemini Code Assist. This is where a sound development background comes in handy. An interesting exploration in this area could be whether code-assist tools can learn from, and be trained on, issues that other developers in an enterprise have encountered while implementing similar coding patterns. The API call to create a new product finally worked after incorporating the suggestion to add @RequestBody.

Handling exceptions in a consistent manner is a standard requirement for all enterprise projects: create a new package for exceptions, a base class to extend, and the other pieces needed to implement custom exceptions, including specific exceptions such as ProductNotFound. Gemini Code Assist does a good job of meeting this requirement; a minimal sketch of the pattern appears at the end of this section.

Part 1 Conclusion
This concludes Part 1 of the article. In Part 2, I will cover the impact of Gemini Code Assist on the remaining lifecycle stages, Deployment and Operate, as well as productivity improvements across the different development lifecycle stages and the next steps prescribed thereof.
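As referenced above, here is a minimal, hypothetical sketch of that consistent exception-handling pattern for the catalog service, assuming Spring Web is on the classpath. The class names (CatalogServiceException, ProductNotFoundException, GlobalExceptionHandler) and the use of @RestControllerAdvice are illustrative assumptions; they mirror, but are not, the exact code Gemini Code Assist produced.

Java

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

// Base class for application-specific exceptions (lives in a dedicated exceptions package).
class CatalogServiceException extends RuntimeException {
    CatalogServiceException(String message) {
        super(message);
    }
}

// Specific exception for a missing product.
class ProductNotFoundException extends CatalogServiceException {
    ProductNotFoundException(Long productId) {
        super("Product not found: " + productId);
    }
}

// Central handler that converts exceptions into consistent HTTP responses.
@RestControllerAdvice
class GlobalExceptionHandler {

    @ExceptionHandler(ProductNotFoundException.class)
    ResponseEntity<String> handleProductNotFound(ProductNotFoundException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body(ex.getMessage());
    }

    @ExceptionHandler(CatalogServiceException.class)
    ResponseEntity<String> handleCatalogException(CatalogServiceException ex) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST).body(ex.getMessage());
    }
}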
Unit Tests

Unit testing is a fundamental part of software development that ensures individual components of your code work as expected. In Go, unit tests are straightforward to write and execute, making them an essential tool for maintaining code quality.

What Is a Unit Test?

A unit test is a small, focused test that validates the behavior of a single function or method. The goal is to ensure that the function works correctly in isolation, without depending on external systems like databases, file systems, or network connections. By isolating the function, you can quickly identify and fix bugs within a specific area of your code.

How Do Unit Tests Look in Go?

Go has built-in support for testing with the testing package, which provides the necessary tools to write and run unit tests. A unit test in Go typically resides in a file with a _test.go suffix and includes one or more test functions that follow the naming convention TestXxx. Here's an example:

Go

package math

import "testing"

func Add(a, b int) int {
	return a + b
}

func TestAdd(t *testing.T) {
	result := Add(2, 3)
	expected := 5
	if result != expected {
		t.Errorf("Add(2, 3) = %d; want %d", result, expected)
	}
}

In this example, the TestAdd function tests the Add function. It checks if the output of Add(2, 3) matches the expected result, 5. If the results don't match, the test will fail, and the error will be reported.

How To Execute Unit Tests

Running unit tests in Go is simple. You can execute all tests in a package using the go test command. From the command line, navigate to your package directory and run:

go test

This command will discover all files with the _test.go suffix, execute the test functions, and report the results. For more detailed output, including the names of passing tests, use the -v flag:

go test -v

If you want to run a specific test, you can use the -run flag followed by a regular expression that matches the test name:

go test -run TestAdd

When To Use Unit Tests

Unit tests are most effective when:

Isolating bugs: They help isolate and identify bugs early in the development process.
Refactoring code: Unit tests provide a safety net that ensures your changes don't break existing functionality.
Ensuring correctness: They verify that individual functions behave as expected under various conditions.
Documenting code: Well-written tests serve as documentation, demonstrating how the function is expected to be used and what outputs to expect.

In summary, unit tests in Go are easy to write, execute, and maintain. They help ensure that your code behaves as expected, leading to more robust and reliable software. In the next section, we'll delve into integration tests, which validate how different components of your application work together.

Integration Tests

While unit tests are crucial for verifying individual components of your code, integration tests play an equally important role by ensuring that different parts of your application work together as expected. Integration tests are particularly useful for detecting issues that may not be apparent when testing components in isolation.

What Is an Integration Test?

An integration test examines how multiple components of your application interact with each other. Unlike unit tests, which focus on a single function or method, integration tests validate the interaction between several components, such as functions, modules, or even external systems like databases, APIs, or file systems.
The goal of integration testing is to ensure that the integrated components function correctly as a whole, detecting problems that can arise when different parts of the system come together.

How Do Integration Tests Look in Go?

Integration tests in Go are often structured similarly to unit tests but involve more setup and possibly external dependencies. These tests may require initializing a database, starting a server, or interacting with external services. They are typically placed in files with a _test.go suffix, just like unit tests, but may be organized into separate directories to distinguish them from unit tests. Here's an example of a basic integration test:

Go

package main

import (
	"database/sql"
	"testing"

	_ "github.com/mattn/go-sqlite3"
)

func TestDatabaseIntegration(t *testing.T) {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		t.Fatalf("failed to open database: %v", err)
	}
	defer db.Close()

	// Setup - create a table
	_, err = db.Exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
	if err != nil {
		t.Fatalf("failed to create table: %v", err)
	}

	// Insert data
	_, err = db.Exec("INSERT INTO users (name) VALUES ('Alice')")
	if err != nil {
		t.Fatalf("failed to insert data: %v", err)
	}

	// Query data
	var name string
	err = db.QueryRow("SELECT name FROM users WHERE id = 1").Scan(&name)
	if err != nil {
		t.Fatalf("failed to query data: %v", err)
	}

	// Validate the result
	if name != "Alice" {
		t.Errorf("expected name to be 'Alice', got '%s'", name)
	}
}

In this example, the test interacts with an in-memory SQLite database to ensure that the operations (creating a table, inserting data, and querying data) work together as expected. This test checks the integration between the database and the code that interacts with it.

How To Execute Integration Tests

You can run integration tests in the same way as unit tests using the go test command:

go test

However, because integration tests might involve external dependencies, it's common to organize them separately or use build tags to distinguish them from unit tests. For example, you can create an integration build tag and run your tests like this:

Go

// +build integration

package main

import "testing"

func TestSomething(t *testing.T) {
	// Integration test logic
}

To execute only the integration tests, use:

go test -tags=integration

This approach helps keep unit and integration tests separate, allowing you to run only the tests that are relevant to your current development or CI/CD workflow.

When To Use Integration Tests

Integration tests are particularly useful in the following scenarios:

Testing interactions: When you need to verify that different modules or services interact correctly
End-to-end scenarios: For testing complete workflows that involve multiple parts of your application, such as user registration or transaction processing
Validating external dependencies: To ensure that your application correctly interacts with external systems like databases, APIs, or third-party services
Ensuring system stability: Integration tests help catch issues that may not be apparent in isolated unit tests, such as race conditions, incorrect data handling, or configuration problems.

Summary

In summary, integration tests in Go provide a powerful way to ensure that your application's components work together correctly. While they are more complex and may require additional setup compared to unit tests, they are invaluable for maintaining the integrity of your software as it scales and becomes more interconnected.
Together with unit tests, integration tests form a comprehensive testing strategy that helps you deliver robust, reliable applications.
Agile transformations can be tough. They’re messy, time-consuming, and more often than not, they fail to deliver the promises that got everyone excited in the first place. That’s why it’s so important to approach an Agile transformation as a full-scale organizational change rather than just a shift in how our development teams work. In my years as a change management consultant, I have studied and applied various change management models, from John Kotter’s 8-Step Change Model to ADKAR and Lean Change Management by Jason Little. I have learned through these experiences and countless transformations that there isn’t a one-size-fits-all approach. That’s why I have developed the VICTORY framework. It’s a straightforward approach, blending the best practices from multiple models with practical insights from leading Agile transformations at scale. The idea is to make it easy to remember and apply, no matter the size or complexity of the organization. The VICTORY Framework for Transformation The VICTORY framework is designed to guide organizations through the often chaotic and challenging process of organizational transformation — not just Agile Transformation. Following this framework ensures the change is not just strategic but sustainable. Here’s how it works: V: Validate the Need for Change Every transformation has to start with a strong reason. Before diving into the change, it’s crucial to validate why the transformation is necessary. What are the core issues driving this change? What happens if we don’t make these changes? We need to establish a sense of urgency to get everyone aligned and committed. Without a compelling “Why,” it’s tough to get the buy-in needed for a successful transformation. Steps To Take Analyze the current challenges and pain points. Engage with key stakeholders to understand their perspectives. Clearly communicate the risks of staying the course without change. I: Initiate Leadership Support Strong leadership is the backbone of any successful transformation. We start by securing solid support from executive leaders and finding champions within the organization who can help drive the change. These leaders will be our advocates, offering feedback and refining the transformation goals as we go along. Steps To Take Get top executives on board and invest in the change. Identify and empower champions across different levels of the organization. Set up channels for continuous communication and feedback. C: Craft a Clear Vision A transformation without a clear vision is like setting off on a journey without a map. We need a vision that is motivating, realistic, and capable of bringing everyone together. This vision should clearly explain why the change is necessary and what the organization will look like once the transformation is complete. It’s also important to test this vision with small groups to make sure it resonates with people at all levels. Steps To Take Develop a vision statement that aligns with the organization’s overall goals. Communicate this vision consistently across the organization. Gather feedback to ensure the vision is clear and inspiring. T: Target Goals and Outcomes With our vision in place, it’s time to get specific about what we want to achieve. We define clear, measurable goals and outcomes. Establishing metrics is crucial — these will keep us on track and provide a way to measure success. This is also the stage where we’ll need to create or adapt tools that will help us track progress effectively. 
Steps To Take Set specific, achievable goals aligned with the vision. Define key objectives and results to monitor progress. Review and adjust goals regularly as the transformation unfolds. O: Onboard With Pilot Teams Instead of launching the transformation organization-wide from the get-go, we start with pilot teams. These teams will help us test new structures, roles, tools, and processes. It’s essential to provide them with the necessary training and support to set them up for success. The insights we gain from these pilots will be invaluable in identifying potential challenges and making adjustments before scaling up. Steps To Take Choose pilot teams that represent a cross-section of the organization. Provide tailored training and ongoing support. Monitor the pilot phase closely to gather insights. R: Review and Adapt Continuous improvement is at the heart of Agile, and that applies to our transformation process, too. We regularly review how the pilot teams are progressing, gather feedback, and measure outcomes. This approach allows us to learn from early experiences and make necessary adjustments before the transformation goes organization-wide. Steps To Take Hold regular retrospectives with pilot teams to gather insights. Adjust the transformation strategy based on what’s working (and what’s not). Share learnings across the organization to keep everyone informed and engaged. Y: Yield To Continuous Scaling Once our pilots are running smoothly, it’s time to scale the transformation across the organization — but do it gradually. Expanding in phases allows us to manage the change more effectively. During this phase, we ensure that governance structures, roles, and performance metrics evolve alongside the new ways of working. Keeping leadership engaged is critical to removing obstacles and celebrating wins as we go. Steps To Take Plan a phased rollout of the transformation. Align governance structures with the new processes. Maintain executive engagement and celebrate every milestone. Don’t Forget the Individual Impact As our organization undergoes this transformation, it’s crucial not to overlook the individuals who will be affected by these changes. This means understanding how the transformation will impact roles, responsibilities, and workflows at a personal level. Each person should feel that they have something positive to look forward to, whether it’s new opportunities for growth, skill development, or simply a more satisfying job. Steps To Take Assess how each role will be impacted by the transformation. Align individual roles with the new ways of working, making sure everyone understands the benefits. Offer opportunities for growth and development that align with the transformation’s goals. Wrapping It Up The VICTORY framework provides a structured yet flexible approach to transformation. By validating the need for change, securing leadership support, crafting a clear vision, targeting specific goals, onboarding with pilot teams, continuously reviewing and adapting, and scaling the transformation gradually, we can navigate the complexities of any kind of transformation effectively. Moreover, focusing on the individual impact of the transformation ensures that the change is not just successful at the organizational level but also embraced by the people who make up the organization. This framework offers a practical roadmap for organizations looking to become more Agile and adaptive in today’s rapidly changing business environment. 
By following the VICTORY framework, we can increase our chances of a successful, sustainable transformation that benefits both the organization and the individuals within it.
Sometimes in our careers, we feel like we're not progressing from our current level. Well, who doesn't? To say the least, it is an annoying place to be: working hard, putting in long hours, and feeling like our career is going nowhere. I was there too. So, after having navigated my way through it, here's the advice I wish I could give my past self.

The Reality Check: Why Hard Work Alone Isn't Enough
I've noticed a common myth in workplaces that working long hours will get you a promotion. The reality is that putting in extra hours is not a differentiator. While dedication and a strong work ethic are important, they may not take you to the next level. Leadership looks for people who can tackle higher-level challenges and lead through influence rather than effort. You have a better chance of moving forward as a strategic thinker than as simply a hard worker. Successful professionals who have climbed the ladder have a few things in common: they prioritize strategic impact, lead by example, and take the initiative to tackle increasingly complex challenges. Camille Fournier, former CTO of Rent the Runway, emphasizes in her book "The Manager's Path" that transitioning from an individual contributor to a leadership role requires a focus on guiding others and taking on projects that impact the entire organization.

Think about two engineers. The first regularly completes work by staying after hours, produces a lot of code, and meets deadlines. The second takes responsibility for cross-functional projects, focuses on identifying trends, solves issues that affect numerous teams, and shares knowledge to elevate team productivity. Although both engineers are valuable, the second has a much higher chance of being promoted. Why? The second engineer is not only contributing to the team but also driving its success and exhibiting the leadership traits that matter at the senior level.

The Path to Promotion: Focus on Next-Level Problems and Leadership
To get past the mid-senior level, you need to stop focusing on the work you do and start focusing on the impact of that work. Will Larson, in his book "An Elegant Puzzle," discusses how senior engineers should focus on high-leverage activities, such as system design and mentorship, rather than getting caught up in day-to-day coding tasks. Below are three strategies that can help you grow.

1. Work on Next-Level Scoped Problems
At the mid-senior level, technical skills are expected. What distinguishes you is your ability to work on problems of larger scope and greater strategic importance. These problems require not only technical expertise but also business acumen and the ability to connect your solutions to the company's long-term goals. Consider owning a cross-team initiative. Suppose there are problems integrating a new technology stack across different products. Instead of contributing by writing code for only your application, you could take ownership of the entire project: coordinating with the teams involved, understanding the variations in their needs and constraints, and delivering a solution that fits the overall product platform strategy. You're then doing more than solving a problem: you're showing that you can manage complexity, influence others, and drive outcomes that have a material impact on the business.
2. Deliver Through Others
As you move up the career ladder, success increasingly comes from influencing and guiding the people around you. This is perhaps the most important transition from individual contributor to leader. How well you mentor, delegate, and collaborate across teams will be crucial for your promotion. Suppose you are tasked with implementing a new feature. Instead of doing the whole implementation yourself, you recognize that a junior engineer can take on portions of it, and you invest in teaching them, providing a proper specification and the best practices they need. By delegating effectively, you empower others and free your time for more strategic, higher-level activities. You also show that you can lead a team, which is one of the competencies needed for senior positions.

3. Think Strategically and Be Proactive
At this level, good execution is expected; you must also think strategically. That means understanding the business context, anticipating the problems that will appear down the road, and proactively suggesting solutions. Suppose you find your company's development process riddled with inefficiencies that slow down the release cycle. Instead of waiting for someone else to fix it, you could propose an initiative that streamlines the process: research best practices, build stakeholder support, and lead the implementation. This proves that, beyond being a problem solver, you are a strategic thinker who can seize opportunities and drive change.

Final Words: Shift Your Mindset, Elevate Your Career
If you feel stuck at your current level, it's time to rethink your approach. Growth is not just about working harder; it's about demonstrating your ability to handle next-level challenges and lead others to success. Jeff Shannon, author of "Hard Work is Not Enough: The Surprising Truth about Being Believable at Work," wrote that people will tell you hard work will take you far, but it won't. He believes that "hard work is a good start" for establishing yourself early in a job, but it is not enough to help you rise to the top. Start by focusing on problems with a broader impact, mentor and guide your peers, and think strategically about how you can contribute to the company's long-term goals. By making these shifts, you'll not only position yourself for progression but also develop the skills and mindset needed to thrive in senior-level roles. It isn't just about the hours you put in; it's about the difference you make.