Cloud architecture refers to how technologies and components are built in a cloud environment. A cloud environment comprises a network of servers that are located in various places globally, and each serves a specific purpose. With the growth of cloud computing and cloud-native development, modern development practices are constantly changing to adapt to this rapid evolution. This Zone offers the latest information on cloud architecture, covering topics such as builds and deployments to cloud-native environments, Kubernetes practices, cloud databases, hybrid and multi-cloud environments, cloud computing, and more!
How to Run Apache Spark on Kubernetes in Less Than 5 Minutes
Using SQS With JMS for Legacy Applications
In today's digital world, email is the go-to channel for effective communication, with attachments containing flyers, images, PDF documents, etc. However, there could be business requirements for building a service for sending an SMS with an attachment as an MMS (Multimedia Messaging Service). This article delves into how to send multimedia messages (MMS), their limitations, and implementation details using the AWS Pinpoint cloud service.

Setting Up AWS Pinpoint Service

Setting Up Phone Pool

In the AWS console, we navigate to AWS End User Messaging and set up the Phone pool. The phone pool comprises the phone numbers from which we will send the message; these are the numbers from which the end user will receive the MMS message.

Figure 1: AWS End User Messaging

Figure 2: Phone Pool

We can add the origination numbers once the phone pool has been created. The originating numbers are 10DLC (10-digit long code). A2P (application-to-person) 10DLC is a method businesses use to send direct text messages to customers. It is the new US-wide system and standard for companies to communicate with customers via SMS or MMS messages.

Configure Configuration Set for Pinpoint MMS Messages

After creating the phone pool, we create the configuration set required to send the Pinpoint message.

Figure 3: Configuration Set

Configuration sets help us log our messaging events, and we can configure where to publish events by adding event destinations. In our case, we configure the destination as CloudWatch and add all MMS events.

Figure 4: Configuration Set Event destinations

Now that all the prerequisites for sending MMS messages are complete, let's move on to the implementation part of sending the MMS message in our Spring microservice.

Sending MMS Implementation in a Spring Microservice

To send the multimedia attachment, we first need to save the attachment to AWS S3 and then share the AWS S3 path and bucket name with the routine that sends the MMS. Below is a sample implementation in the Spring microservice for sending the multimedia message.

Java

@Override
public String sendMediaMessage(NotificationData notification) {
    String messageId = null;
    logger.info("SnsProviderImpl::sendMediaMessage - Inside send message with media");
    try {
        String localePreference = Optional.ofNullable(notification.getLocalePreference()).orElse("en-US");
        String originationNumber = "";
        if (StringUtils.hasText(fromPhone)) {
            JSONObject jsonObject = new JSONObject(fromPhone);
            if (jsonObject != null && jsonObject.has(localePreference)) {
                originationNumber = jsonObject.getString(localePreference);
            }
        }
        SendMediaMessageRequest request = SendMediaMessageRequest.builder()
                .destinationPhoneNumber(notification.getDestination())
                .originationIdentity(originationNumber)
                .mediaUrls(buildS3MediaUrls(notification.getAttachments()))
                .messageBody(notification.getMessage())
                .configurationSetName("pinpointsms_set1")
                .build();
        PinpointSmsVoiceV2Client pinpointSmsVoiceV2Client = getPinpointSmsVoiceV2Client();
        SendMediaMessageResponse resp = pinpointSmsVoiceV2Client.sendMediaMessage(request);
        messageId = resp != null && resp.sdkHttpResponse().isSuccessful() ? resp.messageId() : null;
    } catch (Exception ex) {
        logger.error("ProviderImpl::sendMediaMessage, an error occurred, detail error:", ex);
    }
    return messageId;
}

Here, the NotificationData object is the POJO, which contains all the required attributes for sending the message. It contains the destination number and the list of attachments that need to be sent; ideally, there would be only one attachment.
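For reference, here is a minimal sketch of what these two data objects might look like. The class shapes and field names are assumptions inferred from the getters used in the snippets (getDestination(), getAttachments(), getAttachmentBucket(), and so on), not the actual source of the service.

Java

import java.util.List;

// Hypothetical shape of the notification payload used by sendMediaMessage().
public class NotificationData {
    private String destination;           // destination phone number (E.164 format)
    private String message;               // text body of the MMS
    private String localePreference;      // e.g., "en-US"; used to pick the origination number
    private List<Attachment> attachments; // media to send; ideally a single entry

    public String getDestination() { return destination; }
    public String getMessage() { return message; }
    public String getLocalePreference() { return localePreference; }
    public List<Attachment> getAttachments() { return attachments; }
}

// Hypothetical reference to a media file already stored in S3.
class Attachment {
    private String attachmentBucket;   // S3 bucket name
    private String attachmentFilePath; // S3 object key

    public String getAttachmentBucket() { return attachmentBucket; }
    public String getAttachmentFilePath() { return attachmentFilePath; }
}

Constructors and setters are omitted for brevity.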
The Attachment object contains the S3 path and the bucket name. Below is the implementation for buildS3MediaUrls. We need to send the S3 path and bucket name in a specific format, as shown in the implementation below; it has to be s3://{bucketName}/{S3Path}:

Java

public List<String> buildS3MediaUrls(List<Attachment> attachments) {
    List<String> urls = new ArrayList<>();
    for (Attachment attachment : attachments) {
        String url = String.format("s3://%s/%s", attachment.getAttachmentBucket(), attachment.getAttachmentFilePath());
        urls.add(url);
    }
    return urls;
}

Here is the definition for getPinpointSmsVoiceV2Client:

Java

protected PinpointSmsVoiceV2Client getPinpointSmsVoiceV2Client() {
    return PinpointSmsVoiceV2Client.builder()
            .credentialsProvider(DefaultCredentialsProvider.create())
            .region(Region.of(this.awsRegion))
            .build();
}

The returned messageId is persisted in our database and is used to track the message status further.

Types of Attachments

We can send various multimedia content using Pinpoint, such as images, PDF files, audio, and video files. This enables us to cater to various business use cases, such as sending new product details, invoices, estimates, etc. Attachment size has certain limitations: a single MMS message cannot exceed 600KB in size for media files. We can send various types of content, including:

PDF - Portable Document Format
Image files like PNG, JPEG, GIF
Video/Audio - MP4, MOV

Limitations and Challenges

AWS Pinpoint, with its scalable service, is a robust platform. However, it does have certain limitations, such as the attachment file size, which is capped at 600KB. This could pose a challenge when attempting to send high-resolution image files.
Cost: Sending attachments as MMS is comparatively costlier than just sending SMS using AWS SNS. For MMS, the cost is $0.0195 (Base Price) + $0.0062 (Carrier Fee) = $0.0257 per message, while the cost for AWS SNS SMS is $0.00581 (Base Price) + $0.00302 (Carrier Fee) = $0.00883 per message. So, MMS is roughly three times costlier. AWS Pinpoint has a lot of messaging capabilities, including free messaging for specific types of messages like email and push notifications, but MMS is not part of the free tier.
Tracking messages for end-to-end delivery can be challenging. Usually, with the AWS Lambda and CloudWatch combination, we should be able to track it end to end, but this requires additional setup.
Opening attachments on different types of devices could be challenging. Network carriers could block files for specific types of content.

Conclusion

AWS Pinpoint offers reliable, scalable services for sending multimedia messages. We can send various media types as long as we adhere to the file size limitation. Using Pinpoint, organizations can include multimedia messaging options as part of their overall communication strategy.
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Enterprise Security: Reinforcing Enterprise Application Defense. Many companies wrongly believe that moving to the cloud means their cloud provider is fully responsible for security. However, most known cloud breaches are caused by misconfigurations on the customer's end, not the provider's. Cloud security posture management (CSPM) helps organizations avoid this problem by implementing automated guardrails to manage compliance risks and identify potential misconfigurations that could lead to data breaches. The term CSPM was first coined by Gartner to define a category of security products that automate security and ensure organizations are compliant in the cloud. While continuous monitoring, automation, and proper configuration significantly simplify cloud security management, CSPM solutions offer even more. CSPM tools provide deep insights into your cloud environment by: Identifying unused resources that drain your budgetMapping security team workflows to reveal inefficienciesVerifying the integrity of new systemsPinpointing the most used technologies Despite these benefits, there are important challenges and considerations that enterprises need to address. In this article, we discuss how to navigate these complexities, explore the key challenges of implementing CSPM, and provide insights into maximizing its benefits for effective cloud security management. Foundational Pillars of Enterprise CSPM: The Challenges A security baseline sets the minimum security standards that your organization's technology must meet. However, it's important to note that despite the security baseline being a foundational element, it is not the only one. A comprehensive security program also includes specific security controls, which are the technical and operational measures you implement to meet the baseline standards. Gartner defines CSPM as a solution that "uses standard frameworks, regulations, and policies to identify and assess risks in cloud services configurations." Although CSPM solutions are essential for managing complex, perimeterless multi-cloud environments, they come with their own set of challenges. More than anything, the challenge is to shift away from traditional, perimeter-based security models toward a proactive, adaptive approach that prioritizes continuous monitoring and rapid response. The following challenges, compounded by the scale and dynamism of modern cloud infrastructures, make the effective deployment of CSPM solutions a significant task. Asset Inventory and Control Most enterprise leaders are aware of the challenges in securing assets within a hybrid environment. The ephemeral nature of cloud assets makes it difficult to establish a baseline understanding of what's running, let alone secure it. In such instances, manual inventory checks turn out to be too slow and error prone. Basic tagging can provide some visibility, but it's easily bypassed or forgotten. Given the fundamental issues of securing dynamic assets, several scenarios can impact effective inventory management: Shadow IT. A developer's experiment with new virtual machines or storage buckets can become a security risk if the commissioned resources are not tracked and decommissioned properly. Unmanaged instances and databases left exposed can not only introduce vulnerabilities but also make it difficult to accurately assess your organization's overall security risk.Configuration drift. 
Automated scripts and manual updates can inadvertently alter configurations, such as opening a port to the public internet. Over time, these changes may introduce vulnerabilities or compliance issues that remain unnoticed until it's too late.
Data blind spots. Sensitive data often gets replicated across multiple regions and services, accessed by numerous users and applications. This complex data landscape complicates efforts to track sensitive information, enforce access controls, and maintain regulatory compliance.

Identity and Access Management at Scale

Access privileges, managed through identity and access management (IAM), remain the golden keys to an enterprise's prime assets: its data and systems. A single overlooked permission within IAM could grant unauthorized access to critical data, while over-privileged accounts become prime targets for attackers. Traditional security measures, which often rely on static, predefined access controls and a focus on perimeter defenses, cannot keep up with this pace of change and are inadequate for securing distributed workforces and cloud environments. Quite naturally, the risk of IAM misconfigurations amplifies with scale. This complexity is further amplified by the necessity to integrate various systems and services, each with its own set of permissions and security requirements.

Table 1. Advanced IAM challenges and their impact

Category | Challenges | Impact
Identity federation | Combining identities across systems and domains; establishing and maintaining trust with external identity providers | Increased administrative overhead; security vulnerabilities
Privileged account analytics | Tracking and analyzing activities of privileged accounts; requiring advanced analytics to identify suspicious behavior | Higher risk of undetected threats; increased false positives
Access governance | Applying access policies consistently; conducting regular reviews and certifications | Inconsistent policy enforcement; resource intensive and prone to delays
Multi-factor authentication (MFA) | Ensuring widespread use of MFA; implementing MFA across various systems | User resistance or improper use; integration difficulties with existing workflows and systems
Role-based access control (RBAC) | Defining and managing roles accurately; preventing role sprawl | Management complexity; increased administrative load

Data Protection

Effective data protection in the cloud requires a multi-layered approach that spans the entire data lifecycle — from storage to transmission and processing. While encryption is a fundamental component of this strategy, real-world breaches like the 2017 Equifax incident, where attackers exploited a vulnerability in unpatched software to access unencrypted data, underscore that encryption alone is insufficient. Even with robust encryption, data can be exposed when decrypted for processing or if encryption keys are compromised. Given these limitations, standards like GDPR and HIPAA demand more than just encryption. These include data loss prevention (DLP) solutions that detect and block unauthorized data transfers or tokenization and masking practices to add extra layers of protection by replacing or obscuring sensitive data. Yet these practices are not without their challenges. Fine-tuning DLP policies to minimize false positives and negatives can be a time-consuming process, and monitoring sensitive data in use (when it's unencrypted) presents technical hurdles.
On the other hand, tokenization may introduce latency in applications that require real-time data processing, while masking can hinder data analysis and reporting if not carefully implemented. Network Security for a Distributed Workforce and Cloud-Native Environments The distributed nature of modern workforces means that employees are accessing the network from various locations and devices, often outside the traditional corporate perimeter. This decentralization complicates the enforcement of consistent network security policies and makes it challenging to monitor and manage network traffic effectively. CSPM solutions must adapt to this dispersed access model, ensuring that security policies are uniformly applied and that all endpoints are adequately protected. In cloud-native environments, cloud resources such as containers, microservices, and serverless functions require specialized security approaches. Traditional security measures that rely on fixed network boundaries are ineffective in such environments. CSPM solutions must adapt to this dispersed access model, ensuring that security policies are uniformly applied and that all endpoints are adequately protected. It is also common for enterprises to use a combination of legacy and modern security solutions, each with its own management interface and data format. The massive volume of data and network traffic generated in such large-scale, hybrid environments can be overwhelming. A common challenge is implementing scalable solutions that can handle high throughput and provide actionable insights without introducing latency. Essential Considerations and Challenge Mitigations for Enterprise-Ready CSPM A CSPM baseline outlines the essential security requirements and features needed to enhance and sustain security for all workloads of a cloud stack. Although often associated with IaaS (Infrastructure as a Service), CSPM can also be used to improve security and compliance in SaaS (Software as a Service) and PaaS (Platform as a Service) environments. To advance a baseline, organizations should incorporate policies that define clear boundaries. The primary objective of the baseline should be to serve as the standard for measuring your security level. The baseline should encompass not only technical controls but also the following operational aspects of managing and maintaining the security posture. Infrastructure as Code for Security Infrastructure as Code (IaC) involves defining and managing infrastructure using code, just like you would with software applications. With this approach, incorporating security into your IaC strategy means treating security policies with the same rigor as your infrastructure definitions. Enforcing policies as code enables automated enforcement of security standards throughout your infrastructure's lifecycle. Designing IaC templates with security best practices in mind can help you ensure that security is baked into your infrastructure from the outset. As an outcome, every time you deploy or update your asset inventory, your security policies are automatically applied. The approach considerably reduces the risk of human error while ensuring consistent application of security measures across your entire cloud environment. When designing IaC templates with security policies, consider the following: Least privilege principle. Administer the principle of least privilege by granting users and applications only the required permissions to perform their tasks. Secure defaults. 
Ensure that your IaC templates incorporate secure default configurations for resources like virtual machines, storage accounts, and network interfaces from the start.Automated security checks. Integrate automated security testing tools into your IaC pipeline to scan your infrastructure templates for potential vulnerabilities, misconfigurations, and compliance violations before deployment. Threat Detection and Response To truly understand and protect your cloud environment, leverage logs and events for a comprehensive view of your security landscape. Holistic visibility allows for deeper analysis of threat patterns, enabling you to uncover hidden misconfigurations and vulnerable endpoints that might otherwise go unnoticed. But detecting threats is just the first step. To effectively counter them, playbooks are a core part of any CSPM strategy that eventually utilize seamless orchestration and automation to speed up remediation times. Define playbooks that outline common response actions, streamline incident remediation, and reduce the risk of human error. For a more integrated defense strategy, consider utilizing extended detection and response to correlate security events across endpoints, networks, and cloud environments. To add another layer of security, consider protecting against ransomware with immutable backups that can't be modified. These backups lock data in a read-only state, preventing alteration or deletion by ransomware. A recommended CSPM approach involves write once, read many storage that ensures data remains unchangeable once written. Implement snapshot-based backups with immutable settings to capture consistent, point-in-time data images. Combine this with air-gapped storage solutions to disconnect backups from the network, preventing ransomware access. Cloud-Native Application Protection Platforms A cloud-native application protection platform (CNAPP) is a security solution specifically designed to protect applications built and deployed in cloud environments. Unlike traditional security tools, CNAPPs address the unique challenges of cloud-native architectures, such as microservices, containers, and serverless functions. When evaluating a CNAPP, assess its scalability to ensure it can manage your growing cloud infrastructure, increasing data volumes, and dynamic application architectures without compromising performance. The solution must be optimized for high-throughput environments and provide low-latency security monitoring to maintain efficiency. As you consider CNAPP solutions, remember that a robust CSPM strategy relies on continuous monitoring and automated remediation. Implement tools that offer real-time visibility into cloud configurations and security events, with immediate alerts for deviations. Integrate these tools with your CSPM platform to help you with a thorough comparison of the security baseline. Automated remediation should promptly address issues, but is your enterprise well prepared to tackle threats as they emerge? Quite often, automated solutions alone fall short in these situations. Many security analysts advocate incorporating red teaming and penetration testing as part of your CSPM strategy. Red teaming simulates real-world attacks to test how well your security holds up against sophisticated threats to identify vulnerabilities that automated tools would commonly miss. 
Meanwhile, regular penetration testing offers a deeper dive into your cloud infrastructure and applications, revealing critical weaknesses in configurations, access controls, and data protection. Conclusion With more people and businesses using the cloud, the chances of security problems, both deliberate and accidental, are on the rise. While data breaches are a constant threat, most mistakes still come from simple errors in how cloud systems are set up and from people making avoidable mistakes. In a security-first culture, leaders must champion security as a core component of the business strategy. After all, they are the ones responsible for building and maintaining customer trust by demonstrating a strong commitment to safeguarding data and business operations. The ways that cloud security can be compromised are always changing, and the chances of accidental exposure are growing. But a strong and flexible CSPM system can protect you and your company with quick, automatic responses to almost all cyber threats. This is an excerpt from DZone's 2024 Trend Report, Enterprise Security: Reinforcing Enterprise Application Defense.Read the Free Report
In modern software development, containerization offers an isolated and consistent environment, which is crucial for maintaining parity between development and production setups. This guide provides a comprehensive walkthrough on creating a local development environment using IntelliJ IDEA, DevContainers, and Amazon Linux 2023 for Java development.

Why Use DevContainers?

What Are DevContainers?

DevContainers are a feature provided by Visual Studio Code and other IDEs like IntelliJ IDEA through extensions. They allow you to define a consistent and reproducible development environment using Docker containers. By encapsulating the development environment, you ensure that all team members work in an identical setup, avoiding the "it works on my machine" problem.

Benefits of DevContainers

Consistency: Every developer uses the same development environment, eliminating discrepancies due to different setups.
Isolation: Dependencies and configurations are isolated from the host machine, preventing conflicts.
Portability: Easily share development environments through version-controlled configuration files.
Scalability: Quickly scale environments by creating new containers or replicating existing ones.

Diagram of DevContainers Workflow

Plain Text

+-------------------+
| Developer Machine |
+-------------------+
          |
          | Uses
          v
+-----------------------+
|  Development Tools    |
|  (IntelliJ, VS Code)  |
+-----------------------+
          |
          | Connects to
          v
+-----------------------+
|     DevContainer      |
|  (Docker Container)   |
+-----------------------+
          |
          | Runs
          v
+-----------------------+
|  Development Project  |
|  (Amazon Linux 2023,  |
|  Java, Dependencies)  |
+-----------------------+

Step-By-Step Guide

Prerequisites

Before starting, ensure you have the following installed on your machine:

Docker: Install Docker
IntelliJ IDEA: Download IntelliJ IDEA
Visual Studio Code (optional, for DevContainer configuration): Download VS Code

Step 1: Setting Up Docker and Amazon Linux 2023 Container

1. Pull Amazon Linux 2023 Image

Open a terminal and pull the Amazon Linux 2023 image from Docker Hub:

Shell

docker pull amazonlinux:2023

2. Create a Dockerfile

Create a directory for your project and inside it, create a Dockerfile:

Dockerfile

FROM amazonlinux:2023

# Install necessary packages
RUN yum update -y && \
    yum install -y java-17-openjdk-devel git vim

# Set environment variables
ENV JAVA_HOME /usr/lib/jvm/java-17-openjdk
ENV PATH $JAVA_HOME/bin:$PATH

# Create a user for development
RUN useradd -ms /bin/bash developer
USER developer
WORKDIR /home/developer

3. Build the Docker Image

Build the Docker image using the following command:

Shell

docker build -t amazonlinux-java-dev:latest .

Step 2: Configuring DevContainers

1. Create a DevContainer Configuration

Inside your project directory, create a .devcontainer directory. Within it, create a devcontainer.json file:

JSON

{
  "name": "Amazon Linux 2023 Java Development",
  "image": "amazonlinux-java-dev:latest",
  "settings": {
    "java.home": "/usr/lib/jvm/java-17-openjdk",
    "java.jdt.ls.java.home": "/usr/lib/jvm/java-17-openjdk"
  },
  "extensions": [
    "vscjava.vscode-java-pack"
  ],
  "postCreateCommand": "git clone https://github.com/your-repo/your-project ."
}

2. Optional: Configure VS Code for DevContainers

If using VS Code, ensure the DevContainers extension is installed. Open your project in VS Code and select "Reopen in Container" when prompted.

Step 3: Setting Up IntelliJ IDEA

1. Open IntelliJ IDEA

Open IntelliJ IDEA and navigate to File > New > Project from Existing Sources....
Select your project directory. 2. Configure Remote Development IntelliJ offers remote development capabilities, but since we're using DevContainers, we'll set up the project to work with our local Docker container. 3. Configure Java SDK Navigate to File > Project Structure > Project.Click New... under Project SDK, then select JDK and navigate to /usr/lib/jvm/java-17-openjdk within your Docker container.Alternatively, you can configure this through the terminal by running: Shell docker exec -it <container_id> /bin/bash ... and then configuring the path inside the container. 4. Import Project IntelliJ should automatically detect the project settings. Make sure the project SDK is set to the Java version inside the container. Step 4: Running and Debugging Your Java Application 1. Run Configuration Navigate to Run > Edit Configurations....Click the + button and add a new Application configuration.Set the main class to your main application class.Set the JRE to the one configured inside the container. 2. Run the Application You should now be able to run and debug your Java application within the containerized environment directly from IntelliJ. Step 5: Integrating With Git 1. Clone Repository If not already cloned, use the following command to clone your repository inside the container: Shell git clone https://github.com/your-repo/your-project . 2. Configure Git in IntelliJ Navigate to File > Settings > Version Control > Git.Ensure the path to the Git executable is correctly set, usually /usr/bin/git within the container. Conclusion By following this guide, you now have a robust, isolated development environment for Java development using IntelliJ, DevContainers, and Amazon Linux 2023. This setup ensures consistency across development and production, reducing the "it works on my machine" syndrome and improving overall development workflow efficiency. Remember, containerization and DevContainers are powerful tools that can significantly streamline your development process. Happy coding!
In this third part of our CDK series, the project cdk-quarkus-s3, in the same Git repository, will be used to illustrate a couple of advanced Quarkus to AWS integration features, together with several tricks specific to RESTeasy which is, as everyone knows, the Red Hat implementation of the Jakarta REST specifications.

Let's start by looking at the project's pom.xml file, which drives the Maven build process. You'll see the following dependencies:

...
<dependency>
  <groupId>io.quarkiverse.amazonservices</groupId>
  <artifactId>quarkus-amazon-s3</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-amazon-lambda-http</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-jackson</artifactId>
</dependency>
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-rest-client</artifactId>
</dependency>
...
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>netty-nio-client</artifactId>
</dependency>
<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>url-connection-client</artifactId>
</dependency>
...

The first dependency in the listing above, quarkus-amazon-s3, is a Quarkus extension allowing your code to act as an AWS S3 client and to store and delete objects in buckets or implement backup and recovery strategies, archive data, etc. The next dependency, quarkus-amazon-lambda-http, is another Quarkus extension that aims at supporting the AWS HTTP Gateway API. As the reader already knows from the two previous parts of this series, with Quarkus, one can deploy a REST API as an AWS Lambda using either the AWS HTTP Gateway API or the AWS REST Gateway API. Here we'll be using the former, which is less expensive, hence the mentioned extension. If we wanted to use the AWS REST Gateway API, then we would have had to replace the quarkus-amazon-lambda-http extension with the quarkus-amazon-lambda-rest one.

What To Expect

In this project, we'll be using Quarkus 3.11 which, at the time of this writing, is the most recent release. Some of the RESTeasy dependencies have changed, compared with former versions, hence the dependency quarkus-rest-jackson, which now replaces the quarkus-resteasy one used in 3.10 and before. Also, the quarkus-rest-client extension, implementing the Eclipse MP REST Client specifications, is needed for test purposes, as we will see in a moment. Last but not least, the url-connection-client Quarkus extension is needed because the MP REST Client implementation uses it by default and, consequently, it has to be included in the build process.

Now, let's look at our new REST API. Open the Java class S3FileManagementApi in the cdk-quarkus-s3 project and you'll see that it defines three operations: download file, upload file, and list files. All three use the same S3 bucket created as a part of the CDK application's stack.

Java

@Path("/s3")
public class S3FileManagementApi {

    @Inject
    S3Client s3;

    @ConfigProperty(name = "bucket.name")
    String bucketName;

    @POST
    @Path("upload")
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    public Response uploadFile(@Valid FileMetadata fileMetadata) throws Exception {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(fileMetadata.filename)
                .contentType(fileMetadata.mimetype)
                .build();
        s3.putObject(request, RequestBody.fromFile(fileMetadata.file));
        return Response.ok().status(Response.Status.CREATED).build();
    }
    ...
}

Explaining the Code

The code fragment above reproduces only the upload file operation, the other two being very similar.
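Purely as an illustration of how the other two operations typically look, here is a hedged sketch of a possible list operation using the same injected S3Client. The response field names (objectKey, size) are assumptions guided by the integration test shown later, not the project's actual code.

Java

// Hypothetical companion endpoint inside S3FileManagementApi (sketch only).
// Assumed additional imports: java.util.List, java.util.Map,
// java.util.stream.Collectors, and
// software.amazon.awssdk.services.s3.model.ListObjectsV2Request.
@GET
@Path("list")
@Produces(MediaType.APPLICATION_JSON)
public Response listFiles() {
    ListObjectsV2Request request = ListObjectsV2Request.builder()
            .bucket(bucketName)      // same bucket name injected via MP Config
            .build();
    List<Map<String, Object>> files = s3.listObjectsV2(request).contents().stream()
            .map(o -> Map.<String, Object>of("objectKey", o.key(), "size", o.size()))
            .collect(Collectors.toList());
    return Response.ok(files).build(); // serialized to JSON by quarkus-rest-jackson
}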
Observe how simple the instantiation of the S3Client is, thanks to Quarkus CDI, which avoids the need for several boilerplate lines of code. Also, we're using the Eclipse MP Config specification to define the name of the destination S3 bucket. Our endpoint uploadFile() accepts POST requests and consumes MULTIPART_FORM_DATA MIME content, whose data is structured in two distinct parts: one for the payload and the other containing the file to be uploaded. The endpoint takes an input parameter of the class FileMetadata, shown below:

Java

public class FileMetadata {

    @RestForm
    @NotNull
    public File file;

    @RestForm
    @PartType(MediaType.TEXT_PLAIN)
    @NotEmpty
    @Size(min = 3, max = 40)
    public String filename;

    @RestForm
    @PartType(MediaType.TEXT_PLAIN)
    @NotEmpty
    @Size(min = 10, max = 127)
    public String mimetype;
    ...
}

This class is a data object grouping the file to be uploaded together with its name and MIME type. It uses the RESTeasy-specific @RestForm annotation to handle HTTP requests that have multipart/form-data as their content type. The jakarta.validation.constraints annotations are very practical as well for validation purposes. Coming back to our endpoint above, it creates a PutObjectRequest whose input arguments are the destination bucket name, a key that uniquely identifies the stored file in the bucket (in this case, the file name), and the associated MIME type, for example, TEXT_PLAIN for a text file. Once the PutObjectRequest is created, it is sent via an HTTP PUT request to the AWS S3 service. Please notice how easily the file to be uploaded is inserted into the request body using the RequestBody.fromFile(...) statement.

That's all as far as the REST API exposed as an AWS Lambda function is concerned. Now let's look at what's new in our CDK application's stack:

Java

...
HttpApi httpApi = HttpApi.Builder.create(this, "HttpApiGatewayIntegration")
    .defaultIntegration(HttpLambdaIntegration.Builder.create("HttpApiGatewayIntegration", function).build())
    .build();
httpApiGatewayUrl = httpApi.getUrl();
CfnOutput.Builder.create(this, "HttpApiGatewayUrlOutput").value(httpApi.getUrl()).build();
...

These lines have been added to the LambdaWithBucketConstruct class in the cdk-simple-construct project. We want the Lambda function we're creating in the current stack to be located behind an HTTP Gateway and to act as its backend. This has some advantages, so we need to create an integration for our Lambda function. The notion of integration, as defined by AWS, means providing a backend for an API endpoint. In the case of the HTTP Gateway, one or more backends should be provided for each of the API Gateway's endpoints. The integrations have their own requests and responses, distinct from those of the API itself. There are two integration types:

Lambda integrations, where the backend is a Lambda function
HTTP integrations, where the backend might be any deployed web application

In our example, we're using Lambda integration, of course. There are two types of Lambda integrations as well:

Lambda proxy integration, where the definition of the integration's request and response, as well as their mapping to/from the original ones, aren't required, as they are automatically provided
Lambda non-proxy integration, where we need to explicitly specify how the incoming request data is mapped to the integration request and how the resulting integration response data is mapped to the method response

For simplicity's sake, we're using the first case in our project. This is what the statement .defaultIntegration(...) above is doing.
Once the integration is created, we need to display the URL of the newly created API Gateway, for which our Lambda function is the backend. This way, in addition to being able to directly invoke our Lambda function, as we did previously, we'll be able to do it through the API Gateway. And in a project with several dozen REST endpoints, it's very important to have a single contact point where security policies, logging, auditing, and other cross-cutting concerns can be applied. The API Gateway is ideal as this single contact point.

The project comes with a couple of unit and integration tests. For example, the class S3FileManagementTest performs unit testing using REST Assured, as shown below:

Java

@QuarkusTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class S3FileManagementTest {

    private static File readme = new File("./src/test/resources/README.md");

    @Test
    @Order(10)
    public void testUploadFile() {
        given()
            .contentType(MediaType.MULTIPART_FORM_DATA)
            .multiPart("file", readme)
            .multiPart("filename", "README.md")
            .multiPart("mimetype", MediaType.TEXT_PLAIN)
            .when()
            .post("/s3/upload")
            .then()
            .statusCode(HttpStatus.SC_CREATED);
    }

    @Test
    @Order(20)
    public void testListFiles() {
        given()
            .when().get("/s3/list")
            .then()
            .statusCode(200)
            .body("size()", equalTo(1))
            .body("[0].objectKey", equalTo("README.md"))
            .body("[0].size", greaterThan(0));
    }

    @Test
    @Order(30)
    public void testDownloadFile() throws IOException {
        given()
            .pathParam("objectKey", "README.md")
            .when().get("/s3/download/{objectKey}")
            .then()
            .statusCode(200)
            .body(equalTo(Files.readString(readme.toPath())));
    }
}

This unit test starts by uploading the file README.md to the S3 bucket defined for the purpose. Then it lists all the files present in the bucket and finishes by downloading the file just uploaded. Please notice the following lines in the application.properties file:

Plain Text

bucket.name=my-bucket-8701
%test.quarkus.s3.devservices.buckets=${bucket.name}

The first one defines the name of the destination bucket and the second one automatically creates it. This only works while executed via the Quarkus Mock server. While this unit test is executed in the Maven test phase, against a localstack instance run by testcontainers, automatically managed by Quarkus, the integration one, S3FileManagementIT, is executed against the real AWS infrastructure, once our CDK application is deployed. The integration tests use a different paradigm and, instead of REST Assured, which is very practical for unit tests, they take advantage of the Eclipse MP REST Client specifications, implemented by Quarkus, as shown in the following snippet:

Java

@QuarkusTest
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
public class S3FileManagementIT {

    private static File readme = new File("./src/test/resources/README.md");

    @Inject
    @RestClient
    S3FileManagementClient s3FileManagementTestClient;

    @Inject
    @ConfigProperty(name = "base_uri/mp-rest/url")
    String baseURI;

    @Test
    @Order(40)
    public void testUploadFile() throws Exception {
        Response response = s3FileManagementTestClient.uploadFile(new FileMetadata(readme, "README.md", MediaType.TEXT_PLAIN));
        assertThat(response).isNotNull();
        assertThat(response.getStatusInfo().toEnum()).isEqualTo(Response.Status.CREATED);
    }
    ...
}

We inject S3FileManagementClient, which is a simple interface defining our API endpoints, and Quarkus does the rest: it generates the required client code. We just have to invoke endpoints on this interface, for example uploadFile(...), and that's all.
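For context, an MP REST Client interface of this kind usually looks along the lines of the sketch below. This is only an illustration of the programming model; the real S3FileManagementClient in the repository may declare its methods differently.

Java

// Illustrative sketch of an Eclipse MP REST Client interface (not the project's actual source).
// The configKey "base_uri" matches the configuration key discussed right below.
@RegisterRestClient(configKey = "base_uri")
@Path("/s3")
public interface S3FileManagementClient {

    @POST
    @Path("upload")
    @Consumes(MediaType.MULTIPART_FORM_DATA)
    Response uploadFile(FileMetadata fileMetadata);

    @GET
    @Path("download/{objectKey}")
    Response downloadFile(@PathParam("objectKey") String objectKey);
}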
Have a look at S3FileManagementClient, in the cdk-quarkus-s3 project, to see how everything works, and please notice how the annotation @RegisterRestClient defines a configuration key, named base_uri, used further in the deploy.sh script. Now, to test against the real AWS infrastructure, you need to execute the deploy.sh script, as follows:

Shell

$ cd cdk
$ ./deploy.sh cdk-quarkus/cdk-quarkus-api-gateway cdk-quarkus/cdk-quarkus-s3

This will compile and build the application, execute the unit tests, deploy the CloudFormation stack on AWS, and execute the integration tests against this infrastructure. At the end of the execution, you should see something like:

Plain Text

Outputs:
QuarkusApiGatewayStack.FunctionURLOutput = https://<generated>.lambda-url.eu-west-3.on.aws/
QuarkusApiGatewayStack.LambdaWithBucketConstructIdHttpApiGatewayUrlOutput = https://<generated>.execute-api.eu-west-3.amazonaws.com/
Stack ARN: arn:aws:cloudformation:eu-west-3:...:stack/QuarkusApiGatewayStack/<generated>

Now, in addition to the Lambda function URL that you've already seen in our previous examples, you can see the HTTP API Gateway URL, which you can now use for testing purposes instead of the Lambda one. An E2E test case, exported from Postman (S3FileManagementPostmanIT), is provided as well. It is executed via the Docker image postman/newman:latest, running in testcontainers. Here is a snippet:

Java

@QuarkusTest
public class S3FileManagementPostmanIT {
    ...
    private static GenericContainer<?> postman = new GenericContainer<>("postman/newman")
        .withNetwork(Network.newNetwork())
        .withCopyFileToContainer(MountableFile.forClasspathResource("postman/AWS.postman_collection.json"),
            "/etc/newman/AWS.postman_collection.json")
        .withStartupCheckStrategy(new OneShotStartupCheckStrategy().withTimeout(Duration.ofSeconds(10)));

    @Test
    public void run() {
        String apiEndpoint = System.getenv("API_ENDPOINT");
        assertThat(apiEndpoint).isNotEmpty();
        postman.withCommand("run", "AWS.postman_collection.json",
            "--global-var base_uri=" + apiEndpoint.substring(8).replaceAll(".$", ""));
        postman.start();
        LOG.info(postman.getLogs());
        assertThat(postman.getCurrentContainerInfo().getState().getExitCodeLong()).isZero();
        postman.stop();
    }
}

Conclusion

As you can see, after starting the postman/newman:latest image with testcontainers, we run the E2E test case exported from Postman by passing it the --global-var option in order to initialize the global variable labeled base_uri to the value of the REST API URL saved by the deploy.sh script in the API_ENDPOINT environment variable. Unfortunately, probably due to a bug, the postman/newman image doesn't recognize this option; accordingly, while waiting for this issue to be fixed, this test is disabled for now. You can, of course, import the file AWS.postman_collection.json in Postman and run it this way, after having replaced the global variable {{base_uri}} with the current value of the API URL generated by AWS. Enjoy!
In today’s rapidly evolving enterprise landscape, managing and synchronizing data across complex environments is a significant challenge. As businesses increasingly adopt multi-cloud strategies to enhance resilience and avoid vendor lock-in, they are also turning to edge computing to process data closer to the source. This combination of multi-cloud and edge computing offers significant advantages, but it also presents unique challenges, particularly in ensuring seamless and reliable data synchronization across diverse environments. In this post, we’ll explore how the open-source KubeMQ’s Java SDK provides an ideal solution for these challenges. We’ll focus on a real-life use case involving a global retail chain that uses KubeMQ to manage inventory data across its multi-cloud and edge infrastructure. Through this example, we’ll demonstrate how the solution enables enterprises to achieve reliable, high-performance data synchronization, transforming their operations. The Complexity of Multi-Cloud and Edge Environments Enterprises today are increasingly turning to multi-cloud architectures to optimize costs, enhance system resilience, and avoid being locked into a single cloud provider. However, managing data across multiple cloud providers is far from straightforward. The challenge is compounded when edge computing enters the equation. Edge computing involves processing data closer to where it’s generated, such as in IoT devices or remote locations, reducing latency and improving real-time decision-making. When multi-cloud and edge computing are combined, the result is a highly complex environment where data needs to be synchronized not just across different clouds but also between central systems and edge devices. Achieving this requires a robust messaging infrastructure capable of managing these complexities while ensuring data consistency, reliability, and performance. KubeMQ’s Open-Source Java SDK: A Unified Solution for Messaging Across Complex Environments KubeMQ is a messaging and queue management solution designed to handle modern enterprise infrastructure. The KubeMQ Java SDK is particularly appropriate for developers working within Java environments, offering a versatile toolset for managing messaging across multi-cloud and edge environments. Key features of the KubeMQ Java SDK include: All messaging patterns in one SDK: KubeMQ’s Java SDK supports all major messaging patterns, providing developers with a unified experience that simplifies integration and development.Utilizes GRPC streaming for high performance: The SDK leverages GRPC streaming to deliver high performance, making it suitable for handling large-scale, real-time data synchronization tasks.Simplicity and ease of use: With numerous code examples and encapsulated logic, the SDK simplifies the development process by managing complexities typically handled on the client side. Real-Life Use Case: Retail Inventory Management Across Multi-Cloud and Edge To illustrate how to use KubeMQ’s Java SDK, let’s consider a real-life scenario involving a global retail chain. This retailer operates thousands of stores worldwide, each equipped with IoT devices that monitor inventory levels in real-time. The company has adopted a multi-cloud strategy to enhance resilience and avoid vendor lock-in while leveraging edge computing to process data locally at each store. The Challenge The retailer needs to synchronize inventory data from thousands of edge devices across different cloud providers. 
Ensuring that every store has accurate, up-to-date stock information is critical for optimizing the supply chain and preventing stockouts or overstock situations. This requires a robust, high-performance messaging system that can handle the complexities of multi-cloud and edge environments. The Solution Using the KubeMQ Java SDK, the retailer implements a messaging system that synchronizes inventory data across its multi-cloud and edge infrastructure. Here’s how the solution is built: Store Side Code Step 1: Install KubeMQ SDK Add the following dependency to your Maven pom.xml file: XML <dependency> <groupId>io.kubemq.sdk</groupId> <artifactId>kubemq-sdk-Java</artifactId> <version>2.0.0</version> </dependency> Step 2: Synchronizing Inventory Data Across Multi-Clouds Java import io.kubemq.sdk.queues.QueueMessage; import io.kubemq.sdk.queues.QueueSendResult; import io.kubemq.sdk.queues.QueuesClient; import java.util.UUID; public class StoreInventoryManager { private final QueuesClient client1; private final QueuesClient client2; private final String queueName = "store-1"; public StoreInventoryManager() { this.client1 = QueuesClient.builder() .address("cloudinventory1:50000") .clientId("store-1") .build(); this.client2 = QueuesClient.builder() .address("cloudinventory2:50000") .clientId("store-1") .build(); } public void sendInventoryData(String inventoryData) { QueueMessage message = QueueMessage.builder() .channel(queueName) .body(inventoryData.getBytes()) .metadata("Inventory Update") .id(UUID.randomUUID().toString()) .build(); try { // Send to cloudinventory1 QueueSendResult result1 = client1.sendQueuesMessage(message); System.out.println("Sent to cloudinventory1: " + result1.isError()); // Send to cloudinventory2 QueueSendResult result2 = client2.sendQueuesMessage(message); System.out.println("Sent to cloudinventory2: " + result2.isError()); } catch (RuntimeException e) { System.err.println("Failed to send inventory data: " + e.getMessage()); } } public static void main(String[] args) { StoreInventoryManager manager = new StoreInventoryManager(); manager.sendInventoryData("{'item': 'Laptop', 'quantity': 50}"); } } Cloud Side Code Step 1: Install KubeMQ SDK Add the following dependency to your Maven pom.xml file: XML <dependency> <groupId>io.kubemq.sdk</groupId> <artifactId>kubemq-sdk-Java</artifactId> <version>2.0.0</version> </dependency> Step 2: Managing Data on Cloud Side Java import io.kubemq.sdk.queues.QueueMessage; import io.kubemq.sdk.queues.QueuesPollRequest; import io.kubemq.sdk.queues.QueuesPollResponse; import io.kubemq.sdk.queues.QueuesClient; public class CloudInventoryManager { private final QueuesClient client; private final String queueName = "store-1"; public CloudInventoryManager() { this.client = QueuesClient.builder() .address("cloudinventory1:50000") .clientId("cloudinventory1") .build(); } public void receiveInventoryData() { QueuesPollRequest pollRequest = QueuesPollRequest.builder() .channel(queueName) .pollMaxMessages(1) .pollWaitTimeoutInSeconds(10) .build(); try { while (true) { QueuesPollResponse response = client.receiveQueuesMessages(pollRequest); if (!response.isError()) { for (QueueMessage msg : response.getMessages()) { String inventoryData = new String(msg.getBody()); System.out.println("Received inventory data: " + inventoryData); // Process the data here // Acknowledge the message msg.ack(); } } else { System.out.println("Error receiving messages: " + response.getError()); } // Wait for a bit before polling again Thread.sleep(1000); } } catch 
(RuntimeException | InterruptedException e) { System.err.println("Failed to receive inventory data: " + e.getMessage()); } } public static void main(String[] args) { CloudInventoryManager manager = new CloudInventoryManager(); manager.receiveInventoryData(); } } The Benefits of Using KubeMQ for Retail Inventory Management Implementing KubeMQ’s Java SDK in this retail scenario offers several benefits: Improved inventory accuracy: The retailer can ensure that all stores have accurate, up-to-date stock information, reducing the risk of stockouts and overstock.Optimized supply chain: Accurate data flow from the edge to the cloud streamlines the supply chain, reducing waste and improving response times.Enhanced resilience: The multi-cloud and edge approach provides a resilient infrastructure that can adapt to regional disruptions or cloud provider issues. Conclusion KubeMQ’s open-source Java SDK provides a powerful solution for enterprises looking to manage data across complex multi-cloud and edge environments. In the retail use case discussed, the SDK enables seamless data synchronization, transforming how the retailer manages its inventory across thousands of stores worldwide. For more information and support, check out their quick start, documentation, tutorials, and community forums. Have a really great day!
Cross-Origin Resource Sharing (CORS) is an essential security mechanism utilized by web browsers, allowing for regulated access to server resources from origins that differ in domain, protocol, or port. In the realm of APIs, especially when utilizing AWS API Gateway, configuring CORS is crucial to facilitate access for web applications originating from various domains while mitigating potential security risks. This article aims to provide a comprehensive guide on CORS and integrating AWS API Gateway through CloudFormation. It will emphasize the significance of CORS, the development of authorization including bearer tokens, and the advantages of selecting optional methods in place of standard GET requests.

Why CORS Matters

In the development of APIs intended for access across various domains, CORS is essential in mitigating unauthorized access. By delineating the specific domains permitted to interact with your API, you can protect your resources from Cross-Site Request Forgery (CSRF) attacks while allowing valid cross-origin requests.

Benefits of CORS

Security: CORS plays a crucial role in regulating which external domains can access your resources, thereby safeguarding your API against harmful cross-origin requests.
Flexibility: CORS allows you to define varying levels of access (such as methods like GET, POST, DELETE, etc.) for different origins, offering adaptability based on your specific requirements.
User experience: Implementing CORS enhances user experience by allowing users to seamlessly access resources from multiple domains without encountering access-related problems.

Before we proceed with setting up CORS, we need to understand the need to use optional methods over GET. This comparison helps in quickly comparing the aspects of using GET versus optional methods (PUT, POST, OPTIONS) in API requests.

Reason | GET | Optional Methods (POST, PUT, OPTIONS)
Security | GET requests are visible in the browser's address bar and can be cached, making it less secure for sensitive information. | Optional methods like POST and PUT are not visible in the address bar and are not cached, providing more security for sensitive data.
Flexibility | GET requests are limited to sending data via the URL, which restricts the complexity and size of data that can be sent. | Optional methods allow sending complex data structures in the request body, providing more flexibility.
Idempotency and Safety | GET is idempotent and considered safe, meaning it does not modify the state of the resource. | POST and PUT are used for actions that modify data, and OPTIONS are used for checking available methods.
CORS Preflight | GET requests are not typically used for CORS preflight checks. | OPTIONS requests are crucial for CORS preflight checks, ensuring that the actual request can be made.
Comparison between the POST and PUT methods, their purposes and behavior:

Aspect | POST | PUT
Purpose | Used to create a new resource. | Used to update an existing resource or create it if it doesn't exist.
Idempotency | Not idempotent; multiple identical requests may create multiple resources. | Idempotent; multiple identical requests will not change the outcome beyond the initial change.
Resource Location | The server decides the resource's URI, typically returning it in the response. | The client specifies the resource's URI.
Data Handling | Typically used when the client does not know the URI of the resource in advance. | Typically used when the client knows the URI of the resource and wants to update it.
Common Use Case | Creating new records, such as submitting a form to create a new user. | Updating existing records, such as editing user information.
Caching | Responses to POST requests are generally not cached. | Responses to PUT requests can be cached as the request should result in the same outcome.
Response | Usually returns a status code of 201 (Created) with a location header pointing to the newly created resource. | Usually returns a status code of 200 (OK) or 204 (No Content) if the update is successful.

Setting Up CORS in AWS API Gateway Using CloudFormation

Configuring CORS in AWS API Gateway can be accomplished manually via the AWS Management Console; however, automating this process with CloudFormation enhances both scalability and consistency. Below is a detailed step-by-step guide:

1. Define the API Gateway in CloudFormation

Start by defining the API Gateway in your CloudFormation template:

YAML

Resources:
  MyApi:
    Type: AWS::ApiGateway::RestApi
    Properties:
      Name: MyApi

2. Create Resources and Methods

Define the resources and methods for your API. For example, create a resource for /items and a GET method:

YAML

ItemsResource:
  Type: AWS::ApiGateway::Resource
  Properties:
    ParentId: !GetAtt MyApi.RootResourceId
    PathPart: items
    RestApiId: !Ref MyApi

GetItemsMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    AuthorizationType: NONE
    HttpMethod: GET
    ResourceId: !Ref ItemsResource
    RestApiId: !Ref MyApi
    Integration:
      Type: MOCK
      IntegrationResponses:
        - StatusCode: 200
    MethodResponses:
      - StatusCode: 200

3. Configure CORS

Next, configure CORS for your API method by specifying the necessary headers:

YAML

OptionsMethod:
  Type: AWS::ApiGateway::Method
  Properties:
    AuthorizationType: NONE
    HttpMethod: OPTIONS
    ResourceId: !Ref ItemsResource
    RestApiId: !Ref MyApi
    Integration:
      Type: MOCK
      RequestTemplates:
        application/json: '{"statusCode": 200}'
      IntegrationResponses:
        - StatusCode: 200
          SelectionPattern: '2..'
          ResponseParameters:
            method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
            method.response.header.Access-Control-Allow-Methods: "'*'"
            method.response.header.Access-Control-Allow-Origin: "'*'"
    MethodResponses:
      - StatusCode: 200
        ResponseModels: { "application/json": "Empty" }
        ResponseParameters:
          method.response.header.Access-Control-Allow-Headers: false
          method.response.header.Access-Control-Allow-Methods: false
          method.response.header.Access-Control-Allow-Origin: false

Incorporating Authorization

Implementing authorization within your API methods guarantees that access to specific resources is restricted to authenticated and authorized users. The AWS API Gateway offers various authorization options, including AWS Lambda authorizers, Cognito User Pools, and IAM roles.
YAML

MyAuthorizer:
  Type: AWS::ApiGateway::Authorizer
  Properties:
    Name: MyLambdaAuthorizer
    RestApiId: !Ref MyApi
    Type: TOKEN
    AuthorizerUri: arn:aws:apigateway:<region>:lambda:path/2015-03-31/functions/<lambda_arn>/invocations

GetItemsMethodWithAuth:
  Type: AWS::ApiGateway::Method
  Properties:
    AuthorizationType: CUSTOM
    AuthorizerId: !Ref MyAuthorizer
    HttpMethod: GET
    ResourceId: !Ref ItemsResource
    RestApiId: !Ref MyApi
    Integration:
      Type: AWS_PROXY
      IntegrationHttpMethod: POST
      Uri: !Sub arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${MyFunction.Arn}/invocations
    MethodResponses:
      - StatusCode: 200

After implementation, here's how the API looks in AWS:

Integration request:

API Gateway Documentation can be found here: Amazon API.

Conclusion

Establishing CORS and integrating AWS API Gateway through CloudFormation offers an efficient and reproducible method for managing API access. By meticulously setting up CORS, you guarantee that your APIs remain secure and are accessible solely to permitted origins. Incorporating authorization adds a layer of security by limiting access to only those users who are authorized. Moreover, evaluating the advantages of utilizing optional methods instead of GET requests ensures that your API maintains both security and the flexibility necessary for managing intricate operations. The implementation of these configurations not only bolsters the security and performance of your API but also enhances the overall experience for end-users, facilitating seamless cross-origin interactions and the appropriate management of sensitive information.
Guardrails for Amazon Bedrock enables you to implement safeguards for your generative AI applications based on your use cases and responsible AI policies. You can create multiple guardrails tailored to different use cases and apply them across multiple foundation models (FMs), providing a consistent user experience and standardizing safety and privacy controls across generative AI applications. Until now, Guardrails supported four policies: denied topics, content filters, sensitive information filters, and word filters. The Contextual grounding check policy (the latest one added at the time of writing) can detect and filter hallucinations in model responses that are not grounded in enterprise data or are irrelevant to the user's query. Contextual Grounding To Prevent Hallucinations The generative AI applications that we build depend on LLMs to provide accurate responses. This might rely on the LLM's inherent capabilities or use techniques such as RAG (Retrieval Augmented Generation). However, it's a known fact that LLMs are prone to hallucination and can end up responding with inaccurate information, which impacts application reliability. The Contextual grounding check policy evaluates hallucinations using two parameters: Grounding: This checks if the model response is factually accurate based on the source and is grounded in the source. Any new information introduced in the response will be considered un-grounded. Relevance: This checks if the model response is relevant to the user query. Score-Based Evaluation The result of the contextual grounding check is a set of confidence scores corresponding to grounding and relevance for each model response processed, based on the source and user query provided. You can configure thresholds to filter (block) model responses based on the generated scores. These thresholds determine the minimum confidence score for the model response to be considered grounded and relevant. For example, if your grounding threshold and relevance threshold are each set at 0.6, all model responses with a grounding or relevance score of less than that will be detected as hallucinations and blocked. You may need to adjust the threshold scores based on the accuracy tolerance for your specific use case. For example, a customer-facing application in the finance domain may need a high threshold due to lower tolerance for inaccurate content. Keep in mind that a higher threshold for the grounding and relevance scores will result in more responses being blocked. Getting Started With Contextual Grounding To get an understanding of how contextual grounding checks work, I would recommend using the Amazon Bedrock Console, since it makes it easy to test your Guardrail policies with different combinations of source data and prompts. Start by creating a Guardrails configuration. For this example, I have set the grounding check threshold to 0.85, the relevance score threshold to 0.5, and configured the messages for blocked prompts and responses: As an example, I used this snippet of text from the 2023 Amazon shareholder letter PDF and used it as the Reference source. For the Prompt, I used: What is Amazon doing in the field of quantum computing? The nice part about using the AWS console is that not only can you see the final response (pre-configured in the Guardrail), but also the actual model response (that was blocked). In this case, the model response was relevant since it came back with information about Amazon Braket.
But the response was un-grounded since it wasn’t based on the source information, which had no data about quantum computing, or Amazon Braket. Hence the grounding score was 0.01 — much lower than the configured threshold of 0.85, which resulted in the model response getting blocked. Use Contextual Grounding Check for RAG Applications With Knowledge Bases Remember, Contextual grounding check is yet another policy and it can be leveraged anywhere Guardrails can be used. One of the key use cases is combining it with RAG applications built with Knowledge Bases for Amazon Bedrock. To do this, create a Knowledge Base. I created it using the 2023 Amazon shareholder letter PDF as the source data (loaded from Amazon S3) and the default vector database (OpenSearch Serverless collection). After the Knowledge Base has been created, sync the data source, and you should be ready to go! Let's start with a question that I know can be answered accurately: What is Amazon doing in the field of generative AI? This went well, as expected — we got a relevant and grounded response. Let's try another one: What is Amazon doing in the field of quantum computing? As you can see, the model response got blocked, and the pre-configured response (in Guardrails) was returned instead. This is because the source data does not actually contain information about quantum computing (or Amazon Braket), and a hallucinated response was prevented by the Guardrails. Combine Contextual Grounding Checks With RetrieveAndGenerate API Let’s go beyond the AWS console and see how to apply the same approach in a programmatic way. Here is an example using the RetrieveAndGenerate API, which queries a knowledge base and generates responses based on the retrieved results. I have used the AWS SDK for Python (boto3), but it will work with any of the SDKs. Before trying out the example, make sure you have configured and set up Amazon Bedrock, including requesting access to the Foundation Model(s). Python import boto3 guardrailId = "ENTER_GUARDRAIL_ID" guardrailVersion= "ENTER_GUARDRAIL_VERSION" knowledgeBaseId = "ENTER_KB_ID" modelArn = 'arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-instant-v1' def main(): client = boto3.client('bedrock-agent-runtime') response = client.retrieve_and_generate( input={ 'text': 'what is amazon doing in the field of quantum computing?' }, retrieveAndGenerateConfiguration={ 'knowledgeBaseConfiguration': { 'generationConfiguration': { 'guardrailConfiguration': { 'guardrailId': guardrailId, 'guardrailVersion': guardrailVersion } }, 'knowledgeBaseId': knowledgeBaseId, 'modelArn': modelArn, 'retrievalConfiguration': { 'vectorSearchConfiguration': { 'overrideSearchType': 'SEMANTIC' } } }, 'type': 'KNOWLEDGE_BASE' }, ) action = response["guardrailAction"] print(f'Guardrail action: {action}') finalResponse = response["output"]["text"] print(f'Final response:\n{finalResponse}') if __name__ == "__main__": main() You can also refer to the code in this Github repo. Run the example (don’t forget to enter the Guardrail ID, version, Knowledge Base ID): Python pip install boto3 python grounding.py You should get an output as such: Python Guardrail action: INTERVENED Final response: Response blocked - Sorry, the model cannot answer this question. Conclusion Contextual grounding check is a simple yet powerful technique to improve response quality in applications based on RAG, summarization, or information extraction. 
It can help detect and filter hallucinations in model responses that are not grounded in the source information (i.e., factually inaccurate or introducing new information) or are irrelevant to the user's query. Contextual grounding check is made available as a policy/configuration in Guardrails for Amazon Bedrock and can be plugged in anywhere you are using Guardrails to enforce responsible AI in your applications. For more details, refer to the Amazon Bedrock documentation for Contextual grounding. Happy building!
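As a final illustration, the blocking behavior described above boils down to comparing each score against its configured threshold and intervening if either one falls short. The sketch below shows that decision logic in plain Java; it is purely illustrative and is not a Bedrock API call. The thresholds mirror the example configuration from earlier (grounding 0.85, relevance 0.5), and the relevance score used in main is an assumed value for demonstration.

```java
// Illustrative only: mirrors how a contextual grounding check decides whether
// to block a response based on its grounding and relevance scores.
public class GroundingCheckExample {

    static boolean shouldBlock(double groundingScore, double relevanceScore,
                               double groundingThreshold, double relevanceThreshold) {
        // A response is blocked if either score falls below its threshold.
        return groundingScore < groundingThreshold || relevanceScore < relevanceThreshold;
    }

    public static void main(String[] args) {
        double groundingThreshold = 0.85;
        double relevanceThreshold = 0.5;

        // Example from the article: the response was relevant (assume a high relevance
        // score) but had a grounding score of only 0.01, so it gets blocked.
        boolean blocked = shouldBlock(0.01, 0.95, groundingThreshold, relevanceThreshold);
        System.out.println("Blocked: " + blocked); // prints "Blocked: true"
    }
}
```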
One of the first decisions you'll need to make when working with the AWS Cloud Development Kit (CDK) is choosing the language for writing your Infrastructure as Code (IaC). The CDK currently supports TypeScript, JavaScript, Python, Java, C#, and Go. Over the past few years, I've worked with the CDK in TypeScript, Python, and Java. While there is ample information available online for TypeScript and Python, this post aims to share my experience using Java as the language of choice for the AWS CDK. Wait…What? Use Java With the AWS CDK? Some may say that TypeScript is the most obvious language to use while working with the AWS CDK. The CDK itself is written in TypeScript, and it's also the most used language according to the 2023 CDK Community Survey. Java comes in third place with a small percentage of use. I do wonder if this still holds true given the number of responses to the survey. I've worked with small businesses and large enterprise organizations over the last few years, and I see more and more Java-oriented teams moving their workloads to AWS while adopting the AWS CDK as their Infrastructure as Code tool. Depending on the type of service(s) being built by these teams, they may or may not have any experience with Python or TypeScript and the Node.js ecosystem, which makes sticking to Java an easy choice. General Observations From what I've seen, adopting the CDK in Java is relatively easy for most of these teams, as they already understand the language and the ecosystem. Integrating the CDK with their existing build tools like Maven and Gradle is well documented, which leaves them with the learning curve of understanding how to work with infrastructure as code, how to structure a CDK project, and when to use L1, L2, and L3 constructs. Compared to TypeScript, CDK stacks and constructs written in Java contain a bit more boilerplate code and therefore might feel a bit more bloated if you come from a different language. I personally don't feel this makes the code less readable, and with modern IDEs and coding assistants, I don't feel I'm less productive. The CDK also seems to be gaining wider adoption in the Java community, with recent Java frameworks like Micronaut even offering built-in support for the AWS CDK. See, for instance, the following Micronaut launch configurations: Micronaut Application with API Gateway and CDK for Java runtime, and Micronaut Function with API Gateway and CDK for Java runtime. One of the advantages of Java is that it's a statically typed language, which means it will catch most CDK coding errors at compile time. There are still some errors that you will only see during an actual cdk synth or cdk deploy. For instance, some constructs have required properties whose absence only becomes visible when you try to synthesize the stack, but in my experience, you will run into that in other languages as well. Performance-wise, it feels like the CDK in Java is a bit slower than using TypeScript or other interpreted languages. I haven't measured this; it's more of a gut feeling. It might have to do with the static nature of Java and its corresponding build tools and compile phase. On the other hand, the JSII runtime architecture and the way Java interacts with a JavaScript environment might also have an effect. Java Builders One of the biggest differences when using the AWS CDK with Java is the use of Builders.
When creating constructs with TypeScript, you mainly use the props argument (a map of configuration properties) while creating a construct. Let's take a look at an example: TypeScript const bucket = new s3.Bucket(this,"MyBucket", { versioned: true, encryption: BucketEncryption.KMS_MANAGED }) The Java version of the above snippet uses a Builder class that follows the builder pattern for constructing the properties. If you're unfamiliar with the Builder pattern in Java, I recommend checking out this blog post about using the Builder pattern. Depending on the CDK construct, you might be able to define a CDK resource in two different ways. In the first example, you use the Builder for the Bucket properties. Java Bucket bucket = new Bucket(this, "MyBucket", new BucketProps.Builder() .versioned(true) .encryption(BucketEncryption.KMS_MANAGED) .build()); The alternative is that constructs can have their own builder class, which makes it a little less verbose and easier to read. Java Bucket bucket = Bucket.Builder .create(this, "MyBucket") .versioned(true) .encryption(BucketEncryption.KMS_MANAGED) .build(); IDE Support Overall, IDE support is really great when working with the CDK in Java. I use IntelliJ IDEA on a daily basis, and auto-completion really helps when using the Builder objects. As the CDK documentation is also inside the CDK Java source code, looking up documentation is really easy. It's similar to how you would do it with any other object or library. Third-Party Construct Support The CDK itself is written in TypeScript, and for each supported programming language, a specific binding is generated. This means that when a new resource or feature for an AWS service is added in the TypeScript variant of the CDK, it's also available to developers using a Java-based CDK. Besides the default CDK constructs, there are also a lot of community-generated constructs. Construct Hub is a great place to find them. From what I've seen, most constructs coming out of AWS will support Java as one of the default languages. Community-supported constructs, however, might not. There are several popular constructs that only support TypeScript and Python. Filtering Construct Hub for AWS CDK v2-based constructs, sorted by programming language, results in the following data (number of construct libraries per language):

TypeScript: 1164
Python: 781
.NET: 511
Java: 455
Go: 132

Depending on the type of infrastructure or third-party services you're planning to use, you might not be able to use all available constructs. For instance, the constructs maintained by DataDog are only available in TypeScript, Python, and Go. In my personal experience, though, most construct developers are open to supporting Java. Third-party constructs are based on projen and jsii, which means that adding a Java-based version is most of the time a matter of configuration in the package.json file of the project. JSON "jsii": { "outdir": "dist", "targets": { "java": { "package": "io.github.cdklabs.cdknag", "maven": { "groupId": "io.github.cdklabs", "artifactId": "cdknag" } }, "python": { "distName": "cdk-nag", "module": "cdk_nag" }, "dotnet": { "namespace": "Cdklabs.CdkNag", "packageId": "Cdklabs.CdkNag" }, "go": { "moduleName": "github.com/cdklabs/cdk-nag-go" } }, "tsc": { "outDir": "lib", "rootDir": "src" } }, (An example of how JSII is configured for the CDK NAG project) Once the configuration is in place and the artifacts have been pushed to, for instance, Maven Central, you're good to go.
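To make that concrete, here is a sketch of what consuming such a jsii-generated Java artifact can look like, using the cdk-nag example from the configuration above. It assumes the io.github.cdklabs:cdknag artifact is on the classpath (for example, as a Maven dependency) and that the generated binding exposes the AwsSolutionsChecks aspect with a no-argument constructor; check the package's own documentation for the exact API.

```java
import io.github.cdklabs.cdknag.AwsSolutionsChecks;
import software.amazon.awscdk.App;
import software.amazon.awscdk.Aspects;

// Sketch: applying the jsii-generated cdk-nag Java binding to a CDK app.
public class NagExample {
    public static void main(String[] args) {
        App app = new App();

        // ... define stacks here ...

        // Register the AWS Solutions rule pack as an aspect on the whole app,
        // so every construct in the tree is checked during synthesis.
        Aspects.of(app).add(new AwsSolutionsChecks());

        app.synth();
    }
}
```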
Thinking about it, I once wanted to use a third-party construct that did not support Java (yet). Java support was added quite quickly, and there was also an alternative solution, so I can't remember running into real issues with the lower number of available constructs. Examples, Tutorials, and Documentation I think it's good to reflect on the fact that there are more CDK examples and tutorials available in TypeScript and Python compared to Java. This reflects the findings in the usage chart from the CDK Community Survey. However, reading TypeScript as a Java programmer is relatively easy (in my personal opinion). If you're new to the AWS CDK, there is a ton of example code available on GitHub, YouTube, and numerous blog posts and tutorials. If you're already using the CDK in combination with Java, be sure to write some blog posts or tutorials, so others can see that and benefit from your knowledge! Summary Java is a very viable option when working with the AWS CDK, especially for workload teams already familiar with the language and its ecosystem. IDE support for the CDK is excellent, with features like auto-completion and easy access to source code documentation. All in all, the experience is really good. Keep in mind that picking Java for your infrastructure as code all depends on the context and the environment you're in. I would suggest picking the language that is most applicable to your specific situation. If you still need to make the choice and are already working with Java, I would definitely recommend trying it out!
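To round off the discussion, here is a minimal, self-contained CDK v2 app in Java that ties the earlier bucket snippet into an App and a Stack. The class and construct names are illustrative; the overall shape follows the standard layout that cdk init generates for a Java project.

```java
import software.amazon.awscdk.App;
import software.amazon.awscdk.Stack;
import software.amazon.awscdk.StackProps;
import software.amazon.awscdk.services.s3.Bucket;
import software.amazon.awscdk.services.s3.BucketEncryption;
import software.constructs.Construct;

// Minimal CDK v2 app: one stack containing the versioned, KMS-encrypted bucket
// from the Builder examples above.
public class MyCdkApp {

    static class StorageStack extends Stack {
        StorageStack(final Construct scope, final String id, final StackProps props) {
            super(scope, id, props);

            Bucket.Builder.create(this, "MyBucket")
                    .versioned(true)
                    .encryption(BucketEncryption.KMS_MANAGED)
                    .build();
        }
    }

    public static void main(String[] args) {
        App app = new App();
        new StorageStack(app, "StorageStack", StackProps.builder().build());
        app.synth(); // produces the CloudFormation template for cdk deploy
    }
}
```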
In today's rapidly evolving go-to-market landscape, organizations with diverse product portfolios face intricate pricing and discounting challenges. The implementation of a robust, scalable pricing framework has become paramount to maintaining competitive edge and operational efficiency. This study delves into the strategic utilization of Salesforce CPQ's advanced features, specifically price rules and Quote Calculator Plugins (QCP), to address complex dynamic pricing scenarios. This guide presents an in-depth analysis of ten sophisticated use cases, demonstrating how these automation tools can be harnessed to create agile, responsive pricing models. By emphasizing low-code and declarative configuration methodology, this comprehensive guide provides software developers and solution architects with a blueprint to accelerate development cycles and enhance the implementation of nuanced pricing strategies. What Are Price Rules and QCP? Price Rules in Salesforce CPQ Price Rules are a feature in Salesforce CPQ that allows users to define automated pricing logic. They apply discounts, adjust prices, or add charges based on specified conditions, enabling complex pricing scenarios without custom code. To implement these complex rules in Salesforce CPQ, you'll often need to combine multiple features such as Price Rules, Price Conditions, Price Actions, Custom Fields, Formula Fields, Product Rules, and Lookup Query objects. Adequately set Evaluation event (Before/On/After calculation) and the evaluation order of price rules to avoid any row-lock or incorrect updates. QCP (Quote Calculator Plugin) QCP is a JavaScript-based customization tool in Salesforce CPQ that allows for advanced, custom pricing calculations. It provides programmatic access to the quote model, enabling complex pricing logic beyond standard CPQ features. First, you'll need to enable the QCP in your Salesforce CPQ settings. Then, you can create a new QCP script or modify an existing one. When needed, make sure QCP has access to the quote, line items, and other CPQ objects. QCP has a character limit; therefore, it is advised that it should only be used for logic which cannot be implemented with any declarative CPQ method. Additionally, you may need to use Apex code for more complex calculations or integrations with external systems. Use Case Examples Using Price Rules and QCP Use Case 1: Volume-Based Tiered Discounting Apply different discount percentages based on quantity ranges. For example: Label Minimum_Quantity__c Maximum_Quantity__c Discount_Percentage__c Tier 1 1 10 0 Tier 2 11 50 5 Tier 3 51 100 10 Tier 4 101 999999 15 Price Rule Implementation Use Price Rules with Lookup Query objects to define tiers and corresponding discounts. Create New Price Rule: Name: Volume-Based Tiered DiscountActive: TrueEvaluation Event: On CalculateCalculator: Default CalculatorConditions Met: AllAdd Lookup Query to Price Rule: Name: Volume Discount Tier LookupLookup Object: Volume Discount Tier (the above table represents this Lookup Object)Match Type: SingleInput Field: QuantityOperator: BetweenLow-Value Field: Minimum_Quantity__cHigh-Value Field: Maximum_Quantity__cReturn Field: Discount_Percentage__cAdd Price Action to Price Rule: Type: Discount (Percent)Value Source: LookupLookup Object: Volume Discount Tier LookupSource Variable: Return ValueTarget Object: LineTarget Field: Discount With this configuration, any number of discount tiers could be supported as per the volume being ordered. 
Lookup tables/objects provide a great way to handle a dynamic pricing framework. QCP Implementation Now, let's see how the same use case can be implemented with the QCP script. The code can be invoked with Before/On/After calculating events as per the need of the use case. JavaScript function applyVolumeTieredDiscount(lineItems) { lineItems.forEach(item => { let discount = 0; if (item.Quantity > 100) { discount = 15; } else if (item.Quantity > 50) { discount = 10; } else if (item.Quantity > 10) { discount = 5; } item.Discount = discount; }); } Use Case 2: Bundle Pricing Offer special pricing when specific products are purchased together. For instance, a computer, monitor, and keyboard might have a lower total price when bought as a bundle vs individual components. Price Rule Implementation Create Product Bundles and use Price Rules to apply discounts when all components are present in the quote. Create a new Price Rule: Name: Bundle DiscountActive: TrueEvaluation Event: On CalculateCalculator: Default CalculatorConditions Met: AllAdd Price Conditions: Condition 1: Field: Product CodeOperator: EqualsFilter Value: PROD-ACondition 2: Field: Quote.Line Items.Product CodeOperator: ContainsFilter Value: PROD-BCondition 3: Field: Quote.Line Items.Product CodeOperator: ContainsFilter Value: PROD-CAdd Price Action: Type: Discount (Absolute) Value: 100 // $100 discount for the bundleApply To: GroupApply Immediately: True QCP Implementation JavaScript function applyBundlePricing(lineItems) { const bundleComponents = ['Product A', 'Product B', 'Product C']; const allComponentsPresent = bundleComponents.every(component => lineItems.some(item => item.Product.Name === component) ); if (allComponentsPresent) { const bundleDiscount = 100; // $100 discount for the bundle lineItems.forEach(item => { if (bundleComponents.includes(item.Product.Name)) { item.Additional_Discount__c = bundleDiscount / bundleComponents.length; } }); } } Use Case 3: Cross-Product Conditional Discounting Apply discounts on one product based on the purchase of another. For example, offer a 20% discount on software licenses if the customer buys a specific hardware product. Price Rule Implementation Use Price Conditions to check for the presence of the conditional product and Price Actions to apply the discount on the target product. Create a new Price Rule: Name: Product Y DiscountActive: TrueEvaluation Event: On CalculateCalculator: Default CalculatorConditions Met: AllAdd Price Conditions: Condition 1: Field: Product CodeOperator: EqualsFilter Value: PROD-YCondition 2: Field: Quote.Line Items.Product CodeOperator: ContainsFilter Value: PROD-XAdd Price Action: Type: Discount (Percent)Value: 20Apply To: LineApply Immediately: True QCP Implementation JavaScript function applyCrossProductDiscount(lineItems) { const hasProductX = lineItems.some(item => item.Product.Name === 'Product X'); if (hasProductX) { lineItems.forEach(item => { if (item.Product.Name === 'Product Y') { item.Discount = 20; } }); } } Use Case 4: Time-Based Pricing Adjust prices based on subscription length or contract duration. For instance, offer a 10% discount for 2-year contracts and 15% for 3-year contracts. Price Rule Implementation Use Quote Term fields and Price Rules to apply discounts based on the contract duration. This use case demonstrates the use of another important feature, the Price Action Formula. 
Create a new Price Rule: Name: Contract Duration DiscountActive: TrueEvaluation Event: On CalculateCalculator: Default CalculatorConditions Met: AllAdd Price Condition: (to avoid invocation of price action for every calculation) Type: CustomAdvanced Condition: Quote.Subscription_Term__c >= 24Add Price Action: Type: Discount (Percent)Value Source: FormulaApply To: LineApply Immediately: TrueFormula: JavaScript CASE( FLOOR(Quote.Subscription_Term__c / 12), 2, 10, 3, 15, 4, 20, 5, 25, 0 ) This approach offers several advantages: It combines multiple tiers into a single price rule, making it easier to manage.It's more flexible and can easily accommodate additional tiers by adding more cases to the formula.It uses a formula-based approach, which can be modified without needing to create multiple price rules for each tier. QCP Implementation JavaScript function applyTimeBasedPricing(quote, lineItems) { const contractDuration = quote.Contract_Duration_Months__c; let discount = 0; if (contractDuration >= 36) { discount = 15; } else if (contractDuration >= 24) { discount = 10; } lineItems.forEach(item => { item.Additional_Discount__c = discount; }); } Use Case 5: Customer/Market Segment-Specific Pricing Set different prices for various customer categories. For example, enterprise customers might get a 25% discount, while SMBs get a 10% discount. Price Rule Implementation Use Account fields to categorize customers and Price Rules to apply segment-specific discounts. Create a new Price Rule: Name: Customer Segment DiscountActive: TrueEvaluation Event: On CalculateCalculator: Default CalculatorConditions Met: AllAdd Price Condition: Type: CustomAdvanced Condition: Quote.Account.Customer_Segment__c is not blankAdd Price Action: Type: Discount (Percent)Value Source: FormulaApply To: LineApply Immediately: TrueFormula: JavaScript CASE( Quote.Account.Customer_Segment__c, 'Enterprise', 25, 'Strategic', 30, 'SMB', 10, 'Startup', 5, 'Government', 15, 0 ) QCP Implementation JavaScript function applyCustomerSegmentPricing(quote, lineItems) { const customerSegment = quote.Account.Customer_Segment__c; let discount = 0; switch (customerSegment) { case 'Enterprise': discount = 25; break; case 'SMB': discount = 10; break; } lineItems.forEach(item => { item.Additional_Discount__c = discount; }); } Use Case 6: Competitive Pricing Rules Automatically adjust prices based on competitors' pricing data. For instance, always price your product 5% below a specific competitor's price. Price Rule Implementation Create custom fields to store competitor pricing data on the product object and use Price Rules with formula fields to calculate and apply the adjusted price. 
Create a new Price Rule: Name: Competitive PricingActive: TrueEvaluation Event: On CalculateCalculator: Default CalculatorConditions Met: AllAdd Price Condition: Field: Competitor_Price__cOperator: Is Not NullAdd Price Actions: Action 1: Type: CustomValue Field: Competitor_Price__c * 0.95Target Field: Special_Price__cAction 2 (to ensure price doesn't go below floor price): Type: PriceValue Source: FormulaFormula: MAX(Special_Price__c, Floor_Price__c)Target Field: Special_Price__c QCP Implementation JavaScript function applyCompetitivePricing(lineItems) { lineItems.forEach(item => { if (item.Competitor_Price__c) { const ourPrice = item.Competitor_Price__c * 0.95; // 5% below competitor const minimumPrice = item.Floor_Price__c || item.ListPrice * 0.8; // 20% below list price as floor item.Special_Price__c = Math.max(ourPrice, minimumPrice); } }); } Use Case 7: Multi-Currency Pricing Apply different pricing rules based on the currency used in the transaction. For example, offer a 5% discount for USD transactions but a 3% discount for EUR transactions. The discounted prices can be maintained directly in the Pricebook entry of a particular product however, the price rules can extend the conditional logic further to add a dynamic pricing element based on various conditions based on quote and quote line-specific data. Price Rule Implementation Use the Multi-Currency feature in Salesforce and create Price Rules that consider the Quote Currency field. The lookup table approach will provide further flexibility to the approach. Label Currency_Code__c Discount_Percentage__c USD USD 5 EUR EUR 3 GBP GBP 4 JPY JPY 2 CAD CAD 4.5 AUD AUD 3.5 CHF CHF 2.5 Create Price Rule Name: Multi-Currency DiscountActive: TrueEvaluation Event: On CalculateCalculator: Default CalculatorConditions Met: AllAdd Lookup Query to Price Rule (above table represents the structure of Currency Discount object) Name: Currency Discount LookupLookup Object: Currency DiscountMatch Type: SingleInput Field: CurrencyIsoCodeOperator: EqualsComparison Field: Currency_Code__cReturn Field: Discount_Percentage__cAdd Price Action to Price Rule Type: Discount (Percent)Value Source: LookupLookup Object: Currency Discount LookupSource Variable: Return ValueTarget Object: LineTarget Field: Discount QCP Implementation JavaScript function applyMultiCurrencyPricing(quote, lineItems) { const currency = quote.CurrencyIsoCode; let discount = 0; switch (currency) { case 'USD': discount = 5; break; case 'EUR': discount = 3; break; } //add more currencies as needed lineItems.forEach(item => { item.Additional_Discount__c = discount; }); } Use Case 8: Margin-Based Pricing Dynamically adjust prices to maintain a specific profit margin. For instance, ensure a minimum 20% margin on all products. Price Rule Implementation Create custom fields for cost data and use Price Rules with formula fields to calculate and enforce minimum prices based on desired margins. 
Create a new Price Rule: Name: Minimum MarginActive: TrueEvaluation Event: On CalculateCalculator: Default CalculatorConditions Met: AllAdd Price Condition: Field: (List Price - Cost__c) / List PriceOperator: Less ThanFilter Value: 0.20Add Price Action: Type: CustomValue Field: Cost__c / (1 - 0.20)Target Field: Special_Price__c QCP Implementation JavaScript function applyMarginBasedPricing(lineItems) { const desiredMargin = 0.20; // 20% margin lineItems.forEach(item => { if (item.Cost__c) { const minimumPrice = item.Cost__c / (1 - desiredMargin); if (item.NetPrice < minimumPrice) { item.Special_Price__c = minimumPrice; } } }); } Use Case 9: Geolocation-Based Pricing Set different prices based on the customer's geographical location. Geolocation-based pricing with multiple levels. Apply different pricing adjustments based on the following hierarchy. Price Rule Implementation Use Account, User, or Quote fields to store location data and create Price Rules that apply location-specific adjustments. Label Sales_Region__c Area__c Sub_Area__c Price_Adjustment__c NA_USA_CA North America USA California 1.1 NA_USA_NY North America USA New York 1.15 NA_Canada North America Canada null 1.05 EU_UK_London Europe UK London 1.2 EU_Germany Europe Germany null 1.08 APAC_Japan Asia-Pacific Japan null 1.12 Create the Price Rule: Name: Geolocation Based PricingActive: TrueEvaluation Event: On CalculateCalculator: Default CalculatorConditions Met: AllAdd Lookup Query to Price Rule Name: Geo Pricing LookupLookup Object: Geo PricingMatch Type: SingleInput Field 1: Quote.Account.Sales_Region__c Operator: EqualsComparison Field: Sales_Region__cInput Field 2: Quote.Account.BillingCountry Operator: EqualsComparison Field: Area__cInput Field 3: Quote.Account.BillingState Operator: EqualsComparison Field: Sub_Area__cReturn Field: Price_Adjustment__cAdd Price Action to Price Rule Type: Percent Of ListValue Source: LookupLookup Object: Geo Pricing LookupSource Variable: Return ValueTarget Object: LineTarget Field: Special Price QCP Implementation JavaScript export function onBeforeCalculate(quote, lines, conn) { applyGeoPricing(quote, lines); } function applyGeoPricing(quote, lines) { const account = quote.record.Account; const salesRegion = account.Sales_Region__c; const area = account.BillingCountry; const subArea = account.BillingState; // Fetch the geo pricing adjustment const geoPricing = getGeoPricing(salesRegion, area, subArea); if (geoPricing) { lines.forEach(line => { line.record.Special_Price__c = line.record.ListPrice * geoPricing.Price_Adjustment__c; }); } } function getGeoPricing(salesRegion, area, subArea) { // This is a simplified version. In a real scenario, you'd query the Custom Metadata Type. // For demonstration, we're using a hardcoded object. 
const geoPricings = [ { Sales_Region__c: 'North America', Area__c: 'USA', Sub_Area__c: 'California', Price_Adjustment__c: 1.10 }, { Sales_Region__c: 'North America', Area__c: 'USA', Sub_Area__c: 'New York', Price_Adjustment__c: 1.15 }, { Sales_Region__c: 'North America', Area__c: 'Canada', Sub_Area__c: null, Price_Adjustment__c: 1.05 }, { Sales_Region__c: 'Europe', Area__c: 'UK', Sub_Area__c: 'London', Price_Adjustment__c: 1.20 }, { Sales_Region__c: 'Europe', Area__c: 'Germany', Sub_Area__c: null, Price_Adjustment__c: 1.08 }, { Sales_Region__c: 'Asia-Pacific', Area__c: 'Japan', Sub_Area__c: null, Price_Adjustment__c: 1.12 } ]; // Find the most specific match return geoPricings.find(gp => gp.Sales_Region__c === salesRegion && gp.Area__c === area && gp.Sub_Area__c === subArea ) || geoPricings.find(gp => gp.Sales_Region__c === salesRegion && gp.Area__c === area && gp.Sub_Area__c === null ) || geoPricings.find(gp => gp.Sales_Region__c === salesRegion && gp.Area__c === null && gp.Sub_Area__c === null ); } Use Case 10: Usage-Based Pricing Implement complex calculations for pricing based on estimated or actual usage. For instance, price cloud storage based on projected data volume and access frequency. Price Rule Implementation A tiered pricing model for a cloud storage service based on the estimated monthly usage. The pricing will have a base price and additional charges for usage tiers. This implementation has another variety approach of leveraging custom metadata and configuration settings along with native price rule functionalities. Pricing Model: Base Price: $100 per month0-1000 GB: Included in base price1001-5000 GB: $0.05 per GB5001-10000 GB: $0.04 per GB10001+ GB: $0.03 per GB Step 1: Create Custom Metadata Type in Salesforce setup: Go to Setup > Custom Metadata TypesClick "New Custom Metadata Type"Label: Usage Pricing TierPlural Label: Usage Pricing TiersObject Name: Usage_Pricing_Tier__mdtAdd custom fields: Minimum_Usage__c (Number)Maximum_Usage__c (Number)Price_Per_GB__c (Currency) Step 2: Add records to the Custom Metadata Type: Label Minimum_Usage__c Maximum_Usage__c Price_Per_GB__c Tier 1 0 1000 0 Tier 2 1001 5000 0.05 Tier 3 5001 10000 0.04 Tier 4 10001 999999999 0.03 Create the Price Rule: Name: Usage-Based PricingActive: TrueEvaluation Event: On CalculateCalculator: Default CalculatorConditions Met: AllAdd Price Condition Field: Product.Pricing_Model__cOperator: EqualsFilter Value: Usage-BasedAdd Lookup Query to Price Rule Name: Usage Pricing Tier LookupLookup Object: Usage Pricing TierMatch Type: SingleInput Field: Estimated_Monthly_Usage__cOperator: BetweenLow-Value Field: Minimum_Usage__cHigh-Value Field: Maximum_Usage__cReturn Field: Price_Per_GB__cAdd Price Action to Price Rule Type: CustomValue Source: FormulaTarget Object: LineTarget Field: Special_Price__cFormula: JavaScript 100 + (MAX(Estimated_Monthly_Usage__c - 1000, 0) * Usage_Pricing_Tier_Lookup.Price_Per_GB__c) QCP Implementation JavaScript export function onBeforeCalculate(quote, lines, conn) { applyUsageBasedPricing(quote, lines); } function applyUsageBasedPricing(quote, lines) { lines.forEach(line => { if (line.record.Product__r.Pricing_Model__c === 'Usage-Based') { const usage = line.record.Estimated_Monthly_Usage__c || 0; const basePrice = 100; let additionalCost = 0; if (usage > 1000) { additionalCost += calculateTierCost(usage, 1001, 5000, 0.05); } if (usage > 5000) { additionalCost += calculateTierCost(usage, 5001, 10000, 0.04); } if (usage > 10000) { additionalCost += calculateTierCost(usage, 10001, usage, 
0.03); } line.record.Special_Price__c = basePrice + additionalCost; } }); } function calculateTierCost(usage, tierStart, tierEnd, pricePerGB) { const usageInTier = Math.min(usage, tierEnd) - tierStart + 1; return Math.max(usageInTier, 0) * pricePerGB; } // Optional: Add a function to provide usage tier information to the user export function onAfterCalculate(quote, lines, conn) { lines.forEach(line => { if (line.record.Product__r.Pricing_Model__c === 'Usage-Based') { const usage = line.record.Estimated_Monthly_Usage__c || 0; const tierInfo = getUsageTierInfo(usage); line.record.Usage_Tier_Info__c = tierInfo; } }); } function getUsageTierInfo(usage) { if (usage <= 1000) { return 'Tier 1: 0-1000 GB (Included in base price)'; } else if (usage <= 5000) { return 'Tier 2: 1001-5000 GB ($0.05 per GB)'; } else if (usage <= 10000) { return 'Tier 3: 5001-10000 GB ($0.04 per GB)'; } else { return 'Tier 4: 10001+ GB ($0.03 per GB)'; } } Likewise, there are a plethora of use cases that can be implemented using price rule configuration. The recommendation is to always use a declarative approach before turning to QCP, which is specifically available as an extension to the price rule engine. Note: The rules and scripts above are not compiled. They are added as a demonstration for explanation purposes. Conclusion Salesforce CPQ's Price Rules and Quote Calculator Plugin (QCP) offer a powerful combination for implementing dynamic pricing strategies. Price Rules provide a declarative approach for straightforward pricing logic, while QCP enables complex, programmatic calculations. When used with Custom Metadata Types/Custom lookup objects, these tools create a flexible, scalable, and easily maintainable pricing system. Together, they can address a wide range of pricing needs, from simple to highly sophisticated, allowing businesses to adapt quickly to market changes and implement nuanced pricing strategies. This versatility enables organizations to optimize sales processes, improve profit margins, and respond effectively to diverse customer needs within the Salesforce CPQ ecosystem.
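As a quick way to sanity-check the marginal tier arithmetic used in the usage-based pricing example (Use Case 10) outside of Salesforce, the small Java sketch below mirrors the QCP implementation's logic. It is illustrative only; the figures come from the pricing model above, and for an estimated 7,500 GB it yields $400 (base $100, plus 4,000 GB at $0.05 and 2,500 GB at $0.04).

```java
// Standalone sanity check of the tiered, usage-based pricing logic (illustrative only).
public class UsagePricingCheck {

    // Cost of the usage that falls inside [tierStart, tierEnd], mirroring calculateTierCost.
    static double tierCost(long usageGb, long tierStart, long tierEnd, double pricePerGb) {
        long usageInTier = Math.min(usageGb, tierEnd) - tierStart + 1;
        return Math.max(usageInTier, 0) * pricePerGb;
    }

    static double price(long usageGb) {
        double basePrice = 100.0; // covers the first 1,000 GB
        double additional = 0.0;
        if (usageGb > 1000) {
            additional += tierCost(usageGb, 1001, 5000, 0.05);
        }
        if (usageGb > 5000) {
            additional += tierCost(usageGb, 5001, 10000, 0.04);
        }
        if (usageGb > 10000) {
            additional += tierCost(usageGb, 10001, usageGb, 0.03);
        }
        return basePrice + additional;
    }

    public static void main(String[] args) {
        System.out.println(price(7_500));  // 400.0
        System.out.println(price(12_000)); // 100 + 200 + 200 + 60 = 560.0
    }
}
```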
Ever wondered how Netflix keeps you glued to your screen with uninterrupted streaming bliss? Netflix Architecture is responsible for the smooth streaming experience that attracts viewers worldwide behind the scenes. Netflix's system architecture emphasizes how important it is to determine how content is shaped in the future. Join us on a journey behind the scenes of Netflix’s streaming universe! Netflix is a term that means entertainment, binge-watching, and cutting-edge streaming services. Netflix’s rapid ascent to popularity may be attributed to its vast content collection, worldwide presence, and resilient and inventive architecture. From its start as a DVD rental service in 1997 to its development into a major worldwide streaming company, Netflix has consistently used cutting-edge technology to revolutionize media consumption. Netflix Architecture is designed to efficiently and reliably provide content to millions of consumers at once. The scalability of Netflix’s infrastructure is critical, given its 200 million+ members across more than 190 countries. So, let’s delve into the intricacies of Netflix Architecture and uncover how it continues shaping how we enjoy our favorite shows and movies. Why Understand Netflix System Architecture? It’s important to understand Netflix System Architecture for several reasons. Above all, it sheds light on how Netflix accommodates millions of customers throughout the globe with a flawless streaming experience. We can learn about the technology and tactics that underlie its success better by exploring the nuances of this architecture. Furthermore, other industries can benefit from using Netflix’s design as a blueprint for developing scalable, reliable, and efficient systems. Its design principles and best practices can teach us important lessons about building and optimizing complicated distributed systems. We may also recognize the continual innovation driving the development of digital media consumption by understanding Netflix’s Architecture. Understanding the Requirements for System Design System design is crucial in developing complex software or technological infrastructure. These specifications act as the basis around which the entire system is constructed, driving choices and forming the end product. However, what are the prerequisites for system design, and what makes them crucial? Let’s explore. Functional Requirements The system’s functional requirements specify the features, functions, and capabilities that it must include. These specifications outline the system’s main objective and detail how various parts or modules interact. Functional requirements for a streaming platform like Netflix, for instance, could encompass the following, including but not limited to: Account creation: Users should be able to create accounts easily, providing necessary information for registration.User login: Registered users should have the ability to securely log in to their accounts using authentication credentials.Content suggestion: The platform should offer personalized content suggestions based on user preferences, viewing history, and other relevant data.Video playback capabilities: Users should be able to stream videos seamlessly, with options for playback controls such as play, pause, rewind, and fast forward. Non-Functional Requirements Non-functional requirements define the system’s behavior under different scenarios and ensure that it satisfies certain quality requirements. 
They cover performance, scalability, dependability, security, and compliance aspects of the system. Non-functional requirements for a streaming platform like Netflix, for instance, could include but are not limited to: Performance requirements: During periods of high utilization, the system must maintain low latency and high throughput.Compliance requirements: Regarding user data protection, the platform must abide by Data Protection Regulations standards.Scalability requirements: The infrastructure must be scalable to handle growing user traffic without sacrificing performance.Security requirements: To prevent unwanted access to user information, strong authentication and encryption procedures must be put in place.Reliability and availability requirements: For uninterrupted service delivery, the system needs to include failover methods and guarantee high uptime. Netflix Architecture: Embracing Cloud-Native After a significant setback due to database corruption in August 2008, Netflix came to the crucial conclusion that it was necessary to move away from single points of failure and towards highly dependable, horizontally scalable, cloud-based solutions. Netflix started a revolutionary journey by selecting Amazon Web Services (AWS) as its cloud provider and moving most of its services to the cloud by 2015. Following seven years of intensive work, the cloud migration was finished in early January 2016, which meant that the streaming service’s last remaining data center components were shut down. But getting to the cloud wasn’t a simple task. Netflix adopted a cloud-native strategy, completely overhauling its operational model and technological stack. This required embracing NoSQL databases, denormalizing their data model, and moving from a monolithic application to hundreds of microservices. Changes in culture were also necessary, such as adopting DevOps procedures, continuous delivery, and a self-service engineering environment. Despite the difficulties, this shift has made Netflix a cloud-native business that is well-positioned for future expansion and innovation in the rapidly changing field of online entertainment. Netflix Architectural Triad A strong architectural triad — the Client, Backend, and Content Delivery Network (CDN) — is responsible for Netflix’s flawless user experience. With millions of viewers globally, each component is essential to delivering content. Client The client-side architecture lies at the heart of the Netflix experience. This includes the wide range of devices users use to access Netflix, such as computers, smart TVs, and smartphones. Netflix uses a mix of web interfaces and native applications to ensure a consistent user experience across different platforms. Regardless of the device, these clients manage playback controls, user interactions, and interface rendering to deliver a unified experience. Users may easily browse the extensive content library and enjoy continuous streaming thanks to the client-side architecture’s responsive optimization. Netflix Architecture: Backend Backend architecture is the backbone of Netflix’s behind-the-scenes operations. The management of user accounts, content catalogs, recommendation algorithms, billing systems, and other systems is done by a complex network of servers, databases, and microservices. In addition to handling user data and coordinating content delivery, the backend processes user requests. 
Furthermore, the backend optimizes content delivery and personalizes recommendations using state-of-the-art technologies like big data analytics and machine learning, which raises user satisfaction and engagement. The backend architecture of Netflix has changed significantly over time. It moved to cloud infrastructure in 2007 and adopted Spring Boot as its primary Java framework in 2018. When combined with the scalability and dependability provided by AWS (Amazon Web Services), proprietary technologies like Ribbon, Eureka, and Hystrix have been crucial in effectively coordinating backend operations. Netflix Architecture: Content Delivery Network The Content Delivery Network completes Netflix Architectural Triangle. A Content Delivery Network (CDN) is a strategically positioned global network of servers that aims to deliver content to users with optimal reliability and minimum delay. Netflix runs a Content Delivery Network (CDN) called Open Connect. It reduces buffering and ensures smooth playback by caching and serving material from sites closer to users. Even during times of high demand, Netflix reduces congestion and maximizes bandwidth utilization by spreading content over numerous servers across the globe. This decentralized method of content delivery improves global viewers’ watching experiences, also lowering buffering times and increasing streaming quality. Client-Side Components Web Interface Over the past few years, Netflix’s Web Interface has seen a considerable transformation, switching from Silverlight to HTML5 to stream premium video content. With this change, there would be no need to install and maintain browser plug-ins, which should simplify the user experience. Netflix has increased its compatibility with a wide range of online browsers and operating systems, including Chrome OS, Chrome, Internet Explorer, Safari, Opera, Firefox, and Edge, since the introduction of HTML5 video. Netflix’s use of HTML5 extends beyond simple playback. The platform has welcomed HTML5 adoption as an opportunity to support numerous industry standards and technological advancements. Mobile Applications The extension of Netflix’s streaming experience to users of smartphones and tablets is made possible via its mobile applications. These applications guarantee that users may access their favorite material while on the road. They are available on multiple platforms, including iOS and Android. By utilizing a combination of native development and platform-specific optimizations, Netflix provides a smooth and user-friendly interface for a wide range of mobile devices. With features like personalized recommendations, seamless playback, and offline downloading, Netflix’s mobile applications meet the changing needs of viewers on the go. Users of the Netflix mobile app may enjoy continuous viewing of their favorite series and films while driving, traveling, or just lounging around the house. Netflix is committed to providing a captivating and delightful mobile viewing experience with frequent upgrades and improvements. Smart TV Apps The Gibbon rendering layer, a JavaScript application for dynamic updates, and a native Software Development Kit (SDK) comprise the complex architecture upon which the Netflix TV Application is based. The application guarantees fluid UI rendering and responsiveness across multiple TV platforms by utilizing React-Gibbon, a customized variant of React. Prioritizing performance optimization means focusing on measures such as frames per second and key input responsiveness. 
Rendering efficiency is increased by methods like prop iteration reduction and inline component creation; performance is further optimized by style optimization and custom component development. With a constant focus on enhancing the TV app experience for consumers across many platforms, Netflix cultivates a culture of performance excellence. Revamping the Playback Experience: A Journey Towards Modernization Netflix has completely changed how people watch and consume digital media over the last ten years. But even though the streaming giant has been releasing cutting-edge features regularly, the playback interface’s visual design and user controls haven’t changed much since 2013. After realizing that the playback user interface needed to be updated, the Web UI team set out to redesign it. The team’s three main canvases were Pre Play, Video Playback, and Post Play. Their goal was to increase customer pleasure and engagement. By utilizing technologies like React.js and Redux to expedite development and enhance performance, Netflix revolutionized its playback user interface Netflix Architecture: Backend Infrastructure Content Delivery Network (CDN) Netflix’s infrastructure depends on its Content Delivery Network (CDN), additionally referred to as Netflix Open Connect, which allows content to be delivered to millions of viewers globally with ease. Globally distributed, the CDN is essential to ensuring that customers in various locations receive high-quality streaming content. The way Netflix Open Connect CDN works is that servers, called Open Connect Appliances (OCAs), are positioned strategically so that they are near Internet service providers (ISPs) and their users. When content delivery is at its peak, this proximity reduces latency and guarantees effective performance. Netflix is able to maximize bandwidth utilization and lessen its dependence on costly backbone capacity by pre-positioning content within ISP networks, which improves the total streaming experience. Scalability is one of Netflix’s CDN’s primary features. With OCAs installed in about 1,000 locations across the globe, including isolated locales like islands and the Amazon rainforest, Netflix is able to meet the expanding demand for streaming services across a wide range of geographic areas. Additionally, Netflix grants OCAs to qualified ISPs so they can offer Netflix content straight from their networks. This strategy guarantees improved streaming for subscribers while also saving ISPs’ running expenses. Netflix cultivates a win-win relationship with ISPs by providing localized content distribution and collaborating with them, which enhances the streaming ecosystem as a whole. Transforming Video Processing: The Microservices Revolution at Netflix By implementing microservices, Netflix has transformed its video processing pipeline, enabling unmatched scalability and flexibility to satisfy the needs of studio operations as well as member streaming. With the switch to the microservices-based platform from the monolithic platform, a new age of agility and feature development velocity was brought in. Each step of the video processing workflow is represented by a separate microservice, allowing for simplified orchestration and decoupled functionality. Together, these services—which range from video inspection to complexity analysis and encoding—produce excellent video assets suitable for studio and streaming use cases. 
Microservices have produced noticeable results by facilitating quick iteration and adaptation to shifting business requirements. Playback Process in Netflix Open Connect Worldwide customers can enjoy a flawless and excellent viewing experience thanks to Netflix Open Connect’s playback procedure. It functions as follows: Health reporting: Open Connect Appliances (OCAs) report to the cache control services in Amazon Web Services (AWS) on a regular basis regarding their learned routes, content availability, and overall health.User request: From the Netflix application hosted on AWS, a user on a client device requests that a TV show or movie be played back.Authorization and file selection: After verifying user authorization and licensing, the AWS playback application services choose the precise files needed to process the playback request.Steering service: The AWS steering service chooses which OCAs to serve files from based on the data that the cache control service has saved. The playback application services receive these OCAs from it when it constructs their URLs.Content delivery: On the client device, the playback application services send the URLs of the relevant OCAs. When the requested files are sent to the client device over HTTP/HTTPS, the chosen OCA starts serving them. Below is a visual representation demonstrating the playback process: Databases in Netflix Architecture Leveraging Amazon S3 for Seamless Media Storage Netflix’s ability to withstand the April 21, 2022, AWS outage demonstrated the value of its cloud infrastructure, particularly its reliance on Amazon S3 for data storage. Netflix’s systems were built to endure such outages by leveraging services like SimpleDB, S3, and Cassandra. Netflix’s infrastructure is built on the foundation of its use of Amazon S3 (Simple Storage Service) for media storage, which powers the streaming giant’s huge collection of films, TV series, and original content. Petabytes of data are needed to service millions of Netflix users worldwide, and S3 is the perfect choice for storing this data since it offers scalable, reliable, and highly accessible storage. Another important consideration that led Netflix to select S3 for media storage is scalability. With S3, Netflix can easily expand its storage capacity without having to worry about adding more hardware or maintaining complicated storage infrastructure as its content collection grows. To meet the growing demand for streaming content without sacrificing user experience or speed, Netflix needs to be scalable. Embracing NoSQL for Scalability and Flexibility The need for structured storage access throughout a highly distributed infrastructure drives Netflix’s database selection process. Netflix adopted the paradigm shift towards NoSQL distributed databases after realizing the shortcomings of traditional relational models in the context of Internet-scale operations. In their database ecosystem, three essential NoSQL solutions stand out: Cassandra, Hadoop/HBase, and SimpleDB. Amazon SimpleDB As Netflix moved to the AWS cloud, SimpleDB from Amazon became an obvious solution for many use cases. It was appealing because of its powerful query capabilities, automatic replication across availability zones, and durability. SimpleDB’s hosted solution reduced operational overhead, which is in line with Netflix’s policy of using cloud providers for non-differentiated operations. Apache HBase Apache HBase evolved as a practical, high-performance solution for Hadoop-based systems. 
Its dynamic partitioning strategy makes it easier to redistribute load and create clusters, which is crucial for handling Netflix’s growing volume of data. HBase’s robust consistency architecture is enhanced by its support for distributed counters, range queries, and data compression, which makes it appropriate for a variety of use cases. Apache Cassandra The open-source NoSQL database Cassandra provides performance, scalability, and flexibility. Its dynamic cluster growth and horizontal scalability meet Netflix’s requirement for unlimited scale. Because of its adaptable consistency, replication mechanisms, and flexible data model, Cassandra is perfect for cross-regional deployments and scaling without single points of failure. Since each NoSQL tool is best suited for a certain set of use cases, Netflix has adopted a number of them. While Cassandra excels in cross-regional deployments and fault-tolerant scaling, HBase connects with the Hadoop platform naturally. A learning curve and operational expense accompany a pillar of Netflix’s long-term cloud strategy, NoSQL adoption, but the benefits in terms of scalability, availability, and performance make the investment worthwhile. MySQL in Netflix’s Billing Infrastructure Netflix’s billing system experienced a major transformation as part of its extensive migration to AWS cloud-native architecture. Because Netflix relies heavily on billing for its operations, the move to AWS was handled carefully to guarantee that there would be as little of an impact on members’ experiences as possible and that strict financial standards would be followed. Tracking billing periods, monitoring payment statuses, and providing data to financial systems for reporting are just a few of the tasks that Netflix’s billing infrastructure handles. The billing engineering team managed a complicated ecosystem that included batch tasks, APIs, connectors with other services, and data management to accomplish these functionalities. The selection of database technology was one of the most important choices made during the move. MySQL was chosen as the database solution due to the need for scalability and the requirement for ACID transactions in payment processing. Building robust tooling, optimizing code, and removing unnecessary data were all part of the migration process in order to accommodate the new cloud architecture. Before transferring the current member data, a thorough testing process using clean datasets was carried out using proxies and redirectors to handle traffic redirection. It was a complicated process to migrate to MySQL on AWS; it required careful planning, methodical implementation, and ongoing testing and iteration. In spite of the difficulties, the move went well, allowing Netflix to use the scalability and dependability of AWS cloud services for its billing system. In summary, switching Netflix’s billing system to MySQL on AWS involved extensive engineering work and wide-ranging effects. Netflix's system architecture has updated its billing system and used cloud-based solutions to prepare for upcoming developments in the digital space. Here is Netflix’s post-migration architecture: Content Processing Pipeline in Netflix Architecture The Netflix content processing pipeline is a systematic approach for handling digital assets that are provided by partners in content and fulfillment. The three main phases are ingestion, transcoding, and packaging. 
Here is Netflix’s post-migration architecture:

Content Processing Pipeline in Netflix Architecture

The Netflix content processing pipeline is a systematic approach for handling digital assets delivered by content and fulfillment partners. The three main phases are ingestion, transcoding, and packaging.

Ingestion

During the ingestion stage, source files, such as audio, timed text, or video, are thoroughly examined for accuracy and compliance. These verifications include file format validation, decodability of compressed bitstreams, compliance with Netflix delivery criteria, semantic signal domain inspections, and the integrity of the data transfer.

Transcoding and Packaging

Once sources pass the ingestion stage, they are transcoded to produce output elementary streams. These streams are then encrypted and packaged into distribution-ready streamable containers.

Ensuring Seamless Streaming With Netflix’s Canary Model

Since client applications are the main way users engage with a brand, they must be of excellent quality for a global digital product. Netflix invests heavily in thoroughly evaluating updated application versions. Nevertheless, exhaustive internal testing is difficult because Netflix runs on thousands of device types and is powered by hundreds of independently deployed microservices. As a result, it is crucial to support release decisions with solid field data gathered during the update process.

To expedite the assessment of updated client applications, Netflix formed a specialized team to mine health signals from the field. This investment increased development velocity and improved application quality and development procedures.

Client applications: Netflix upgrades its client apps in two ways: direct downloads and app store deployments. Direct downloads give Netflix greater control over distribution.
Deployment strategies: Although the advantages of regular, incremental releases for client apps are well known, updating software presents certain difficulties. Since every user’s device delivers data in a stream, efficient signal sampling is crucial. Netflix’s deployment strategies are customized to the distinct challenges posed by a wide range of user devices and complex microservices, and the strategy differs by client type, for example, smart TVs versus mobile applications. New client application versions are made available progressively through staged rollouts, which provide prompt failure handling and intelligent backend service scaling. Monitoring client-side error rates and adoption rates during rollouts keeps the deployment procedure consistent and effective.
Staged rollouts: To reduce risk and scale backend services wisely, staged rollouts progressively deploy new software versions (a minimal code sketch of this idea follows this section).
A/B tests/client canaries: Netflix employs an intensive variation of A/B testing known as “client canaries,” which involves testing complete applications so that updates can ship within a few hours.
Orchestration: Orchestration reduces the workload associated with frequent deployments and analysis and is useful for managing A/B tests and client canaries.

In summary, Netflix’s use of the client canary model enables frequent app updates while preserving flawless streaming experiences for millions of customers.
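To illustrate the staged-rollout idea referenced above, here is a minimal, hypothetical Java sketch of a percentage-based gate that deterministically assigns a device to the new client version based on a hash of its device ID. The class name, thresholds, and device IDs are invented for illustration; this is not Netflix's rollout tooling.

Java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical staged-rollout gate: a device is offered the new client version only if its
// hash bucket falls under the current rollout percentage, so exposure grows as the
// percentage is raised (e.g., 1% -> 5% -> 25% -> 100%) without flapping per device.
public class StagedRolloutGate {

    private final int rolloutPercent; // 0..100, raised gradually while field health signals stay green

    public StagedRolloutGate(int rolloutPercent) {
        this.rolloutPercent = rolloutPercent;
    }

    public boolean offerNewVersion(String deviceId) {
        return bucketFor(deviceId) < rolloutPercent;
    }

    // Deterministic bucket in [0, 100) derived from a SHA-256 hash of the device ID.
    private static int bucketFor(String deviceId) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(deviceId.getBytes(StandardCharsets.UTF_8));
            int value = ((digest[0] & 0xFF) << 8) | (digest[1] & 0xFF);
            return value % 100;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }

    public static void main(String[] args) {
        StagedRolloutGate gate = new StagedRolloutGate(5); // 5% canary stage
        System.out.println(gate.offerNewVersion("tv-device-42"));     // same device, same answer every time
        System.out.println(gate.offerNewVersion("mobile-device-77"));
    }
}

Because the bucket is derived from a hash of the device ID, a device's assignment stays stable across checks, and raising the rollout percentage only ever adds devices to the new version, which keeps field signals comparable between stages.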
Netflix Architecture Diagram

Netflix’s system architecture is a complex ecosystem built from Python and Java with Spring Boot for backend services, and Apache Kafka and Flink for data processing and real-time event streaming. Redux, React.js, and HTML5 on the front end provide a captivating user experience. Numerous data stores, including Cassandra, HBase, SimpleDB, MySQL, and Amazon S3, handle enormous volumes of media content and support real-time analytics. Jenkins and Spinnaker support continuous integration and deployment, and AWS powers the entire infrastructure with scalability, dependability, and global reach. These technologies make up only a small portion of Netflix’s huge tech stack, which reflects its dedication to providing flawless entertainment experiences to its vast worldwide audience.

Conclusion of Netflix Architecture

Netflix’s system architecture has revolutionized the entertainment industry. Throughout its evolution from a DVD rental service to a major worldwide streaming player, Netflix’s technological infrastructure has been essential to its success. Supported by Amazon Web Services (AWS), the architecture guarantees uninterrupted streaming for a global user base. With its Client, Backend, and Content Delivery Network (CDN), Netflix ensures faultless content delivery across devices, and its innovative use of HTML5 and personalized recommendations further improves the user experience. Despite some obstacles along the way, Netflix came out stronger after making the switch to a cloud-native setup. In the quickly evolving field of online entertainment, Netflix has positioned itself for future development and innovation by embracing microservices, NoSQL databases, and cloud-based solutions. Any tech venture can benefit from understanding Netflix’s system. Put simply, Netflix’s system architecture is not just about technology; it aims to transform the way we consume media, quietly making sure that everything runs smoothly when viewers binge-watch and increasing everyone’s enjoyment of the entertainment.